
INFORMATION TECHNOLOGY PDF

Saturday, August 10, 2019


Contents:
1. Introduction
2. What is Information?
3. Information, the Life Blood
4. What is Information Technology?
5. Organisation and Structure
6. Computers in Use
7. Information Technology (IT) Governance


Information Technology Pdf

Author: FLORENTINA OLIVAS
Language: English, Spanish, Arabic
Country: Croatia
Genre: Politics & Laws
Pages: 636
Published (Last): 10.02.2016
ISBN: 829-9-59792-756-4
ePub File Size: 21.34 MB
PDF File Size: 20.80 MB
Distribution: Free* [*Registration Required]
Downloads: 42044
Uploaded by: EMILY

The Basics – What is Information?

• Information is data processed for some purpose.
• Information can only be considered to be 'real' info if it meets certain criteria.

On Jan 1, , Salah Alkhafaji and others published Fundamentals of Information Technology. Information technology is this combination of traditional computer and communication technologies. The purpose of this book is to help you use and understand information technology.

The statements and opinions contained in proceedings are those of the participants and are not endorsed by other participants, the planning committee, or the National Academies. Consensus Study Reports: Each report has been subjected to a rigorous and independent peer-review process and it represents the position of the National Academies on the statement of task. Computers and Information Technology.


ONE: Introduction

While mankind has developed myriad ways of applying and controlling power to dominate and shape our environment, through the use of tools, weapons, machines, fuels, vehicles, instruments, clothing, buildings and roads, metals, plastics and drugs, agriculture, and electricity, the handling of information has lagged considerably, perhaps because the human brain is itself so remarkably powerful.

Until recently, there have been only three major developments in this area: the invention of written or painted or carved language, some five or six thousand years ago; that of simple arithmetic operations, using what would now be called a digital representation of numbers, about a thousand years later; and that of printing, about five hundred years ago. With written language, we get the capacity to make a permanent record of information and also to convey messages across space and time: storage, retrieval, and communication.

With digital arithmetic, we get the ability to perform accurate, repeatable manipulations of quantitative data. With printing, we can make many identical copies of the same record and so broadcast a single message to a wide and continuing audience.

Beyond these outstanding advances, until the last hundred years or so, the only progress was in engineering: increasingly plentiful production of more powerful, reliable, efficient, faster, and cheaper devices to implement these concepts.

In the last hundred years, we have seen the rapidly accelerating advent of a technology so powerful, novel, widespread, and influential that we may indeed call it the Second Industrial Revolution.

In only about 40 years, electronic communications and news media have become commonplace and indispensable. Computers have proliferated, becoming increasingly fast, powerful, small, and cheap, so that now there is scarcely a human activity in which they are not to be found, bearing an increasing share of the burden of repetitive information processing, just as the machines of the First Industrial Revolution have taken over the majority of heavy and unpleasant physical labour.

Now, information can not only be stored, retrieved, communicated, and broadcast in enormous quantities and at phenomenal speeds; but it can also be rearranged, selected, marshalled, and transformed. Until recently, these activities were the sole domain of the human brain.

While creative, judicious, moral, and aesthetic choices are still best left to people, all the tedious and mechanical mental processes can now be relegated to the accurate, fast, and tireless machines.

Any sequence of operations on information that can be precisely specified can be carried out without further human intervention or supervision.

At first, computers were the experimental toys of university researchers; then they became the tools of government establishments and giant corporations, huge, expensive, individually designed and manufactured, and beyond the reach of any but the wealthiest organizations.

With the emergence of powerful, cheap, mass-produced computers-on-a-chip, the picture has changed radically. Now we see tiny computers everywhere: in wrist-watches, microwave ovens, electronic games, pocket calculators, cameras, typewriters, musical instruments, etc. What used to be done, with few options, by intricate mechanical devices is now performed, with great flexibility and convenience and at much less expense, by the ubiquitous preprogrammed microcomputer.


The probable future has become one of millions of small yet powerful computers, controlling virtually every machine and appliance. These are distributed in every home, on every desk, in every workshop; many of them connected in a maze of small and large networks, much like the present telephone network. This enables individual computers to communicate, sharing information in a gigantic distributed data-base, and gaining, through distributed processing, computational power whose extent is yet difficult to gauge; all this following the individual requirements and choices of the owner or operator of each machine.

Increasingly, we are confronted, not only with the results of the use of computers throughout industry, commerce, banking, advertising, science, the communications industry, newspapers, airlines, and hospitals; but with the realistic possibility of purchasing computer power for our own small enterprises, offices, and homes.

This may be done in a variety of ways; but in all of them, the real cost of computation is constantly diminishing. Computer programming is likely to become the literacy medium of the third millennium AD. Elementary schools may well be teaching it before long, and we might be well advised to gain at least a smattering of knowledge of computers and of programming competence, especially since computer languages and programming environments are becoming increasingly helpful and friendly to the uninitiated user.

A computer is a machine for the automatic processing of information. Historically, this information was numerical, and computers were machines for doing arithmetic. Unlike the simpler calculating machines, which can perform only one elementary arithmetic operation at a time and need to be told what to do next (usually by suitable button-pushes), computers can be given a list of operations to perform (often with branching and repetitions, depending on tests of sign or value included among the operations), which they will then execute in proper sequence without further intervention.

This sequence of instructions is called a program. A digital computer stores its information in the form of words, finite ordered sets of digits, each of which can have only one of a finite set of values. Considerations of simplicity, reliability and economy dictate that electrical engineers should design computers to consist of a great number of similar pieces of circuitry, each of which can only be in one of two states, usually denoted by 0 and 1.

Such binary digits, or bits, are the elements of which computer digital representations are built. A row of eight bits is called a byte, and the majority of computers have their storage of information organized in words of one, two, four, or eight bytes (8, 16, 32, or 64 bits).

A group of four bits, having 16 possible states, may now be viewed as a single digit of a representation (the hexadecimal, or hex), which is much more compact and humanly intelligible than a long string of zeroes and ones.
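The correspondence between groups of four bits and single hex digits can be seen directly. The following sketch (Python is used here purely for illustration; it is not part of the original text) converts one 16-bit word between the two notations:

```python
# A 16-bit word, written out as 16 binary digits:
word = 0b1011000011111101

# Each group of four bits has 16 possible states (0000-1111),
# so it can be written as a single hexadecimal digit:
print(format(word, "016b"))  # binary: 1011000011111101
print(format(word, "04X"))   # hex:    B0FD

# The hex form is far more compact than the string of 0s and 1s,
# yet converts back and forth exactly:
assert int("B0FD", 16) == word
```

Reading off the groups: 1011 is B, 0000 is 0, 1111 is F, 1101 is D.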

These are, essentially: (1) the operation-control unit (OCU), and (2) the arithmetic-logic unit (ALU), together with a set of local-memory (LM) registers. The OCU keeps track of the memory location of the next instruction to be executed, and analyzes the current instruction so as to activate the proper operation: a memory transfer, a nonsequential jump (by appropriately changing the address of the next instruction), input or output of information, or a computation (performed by the ALU), as indicated by the instruction code.

Registers attached to the ALU, holding its operands and results, are usually called accumulators, and they are normally double-length, since the product of two k-bit numbers is a 2k-bit number. Other LM registers are used, for example, for counting.

Originally, the CPU was a sizable piece of electronics, hand-assembled and highly complex. With the advent of micro-miniaturization of circuitry, printing and photographic techniques, and the mass production of components, only the largest computers (mainframes) are built in the old way.

Smaller systems generally have the entire CPU on a single chip. Among these, the name microcomputer is now applied to those with less than a million words of memory and a word length of one or two bytes; the name minicomputer applies to the larger machines, with two- to four-byte words and one to a hundred million words of memory.

The smallest micro is probably more powerful than the big computers used by universities and industry in earlier decades. The main memory (MM) consists of magnetic or electronic components which store the information (both data and instructions) needed by the computer.

The individual words are directly addressable from the CPU by number (rather like houses in a street), and their contents are retrievable in very short times, of the order of the operation time of the CPU (ranging from fractions of a nanosecond, or one billionth of a second, for the fastest mainframes to several microseconds, or millionths of a second, for the slower micros). This is often referred to as high-speed storage or random-access memory (RAM).

While most of the MM is erasable and may be changed at will, some memory is used to store constants and often-used utility programs and is not erasable by the CPU: such memory is called read-only memory (ROM). Sometimes this is optional and can be plugged into the computer: this is called firmware. Thus, we write 16K for 2^14 and ¼M for 2^18. Almost all computer instructions comprise an operation code (usually one byte long, allowing 256 possible operations to be specified), followed by an operand (reference number, index, or address) of variable length, since some operations require more data than others; for instance, the STOP instruction needs no operand, so it is one byte long.
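The K and M shorthand for memory sizes is based on powers of two (K = 1024 = 2^10, M = 2^20), and the one-byte op-code figure follows from the same arithmetic. A quick check in Python (an illustration, not from the book):

```python
K = 2**10   # 1024, the "K" of computer storage sizes
M = 2**20   # 1048576, the "M"

# 16K words is 16 * 1024 = 2^14 addresses:
assert 16 * K == 2**14

# A quarter of a megaword, 1/4 M, is 2^18 addresses:
assert M // 4 == 2**18

# A one-byte op-code allows 2^8 = 256 distinct operations:
assert 2**8 == 256
print(16 * K, M // 4, 2**8)  # 16384 262144 256
```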

Access times can be quite good for sequential access along the tape, but random-access time is poor at best, running to seconds or even minutes. Its only advantages are economy and a virtually unlimited total storage capacity (on numerous cassettes or reels; though only as many units as one has on-line tape-drives are actually accessible without human intervention).

Disk Memory

When we wish for practically useful external memory (EM), combining large capacity with relative economy and speed of random access, we must turn to drum or disk memory; and, nowadays, the former has been practically replaced by the latter.

Disk memory is of two types: floppy disk and hard disk, the first being the cheaper, slower, smaller-capacity option.

The information is stored on concentric circular tracks (not on a single spiral track, as on a sound record), on one or both sides of the disk. The number of tracks and the number of bytes per track vary (the density increasing with precision of engineering, and so with cost of the drive), but the total capacity of a floppy disk is in the range of 50KB to 1MB.

The disks rotate at, typically, 300 rpm, and access time is governed by the time required to place the movable head on the right track (a fraction of a second), plus the fifth of a second taken by the head to traverse the circumference of the track in search of a record; thereafter, consecutive bytes are accessed at some thousands per second. Hard disks are rigid and have larger diameters. There are drives which hold anything from one to a dozen disks, rotating at about ten times the speed of floppy-disk drives and so diminishing the access time of records in a track, with one or several heads.

Fixed-head drives naturally must have a head for each track (which costs more), but save head-movement time in random access. Winchester disks are movable-head drives with sealed-in disks, where the heads ride very close to the disk, cushioned by the layer of air between.

In floppy-disk drives, the head actually rides on the disk, eventually wearing it out. The capacity of a hard-disk drive ranges from 10 MB upwards in a single drive. Some movable-head hard-disk drives have removable disks or disk-packs, allowing for greater library storage. Increasingly in large computers, and almost universally in small ones, the main input is from the keyboard of a terminal. This is much like a typewriter keyboard, and depressing any key sends an 8-bit code to the computer.

It is quite common for the computer to be connected to several terminals, all competing for its attention.

This is called time-sharing. The computer cycles around the terminals, looking for their several inputs while dividing its CPU time among them. The main output of the computer is to the display devices of the terminals; these are either video displays (cathode-ray tubes, CRT, just like the screens of black-and-white or color TV sets; indeed, simple micros sometimes use ordinary television sets as display devices) or printers, in so-called hard-copy terminals.
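The cycling described above is essentially a round-robin. A minimal sketch, in Python and with invented terminal names, of how a time-shared machine divides its attention:

```python
from collections import deque

# Hypothetical sketch of time-sharing: the computer cycles around its
# terminals, giving each a small slice of CPU attention in turn.
def run_time_shared(terminal_names, cycles):
    terminals = deque(terminal_names)
    served = []
    for _ in range(cycles):
        t = terminals[0]          # attend to the terminal at the front
        terminals.rotate(-1)      # then move on to the next one
        served.append(t)          # one time-slice of work done for t
    return served

print(run_time_shared(["term-A", "term-B", "term-C"], 5))
# ['term-A', 'term-B', 'term-C', 'term-A', 'term-B']
```

Each terminal gets service at regular intervals; if the slices are short enough, every user has the illusion of a dedicated machine.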

Of course, the computer may be connected to additional video displays and printers, of different qualities, as well as to plotters, a kind of printer for drawing graphs and diagrams.

Many types and speeds of printers exist. Output from the computer can similarly follow the reverse process, yielding visible or audible results, or the control of mechanical or electrical equipment. Thus, computers can draw pictures (often, moving pictures), make music and other sounds, and can control appliances, machinery, and whole manufacturing processes.

It is also possible to connect several computers in this way. This is called the formation of a computer network. Of course, computers may also be connected by cable, fibre-optics, or microwave link.

Many networks do not have a central computer at all; but are simply a collection of independent computers linked for the sharing of information and, sometimes, computing capabilities. Often, they permit the exchange of messages computer mail and the pooling of data distributed data-base.

They may also share a common bank of memory, accessible to all. Finally, since the invention of computer networks, designers have been investigating the possibilities of computers made up of an array of CPUs multicomputer, parallel processors, distributed processing.

These new ideas are very powerful and far-reaching: they will probably revolutionize our ideas of computers, and of their applications, in the next few years. So far, we have spoken of the various peripheral devices as connected to the CPU, without examining how. In fact, this may be done in several ways.

We can simply have a separate connection or port for each device, but this limits rather severely the number of devices that may be connected to the CPU. Another way is to have a single bus or connection to which any number of devices may be attached.

The information signal must then carry an appropriate address. The bus receives all signals, and individual devices (including the CPU) seek out and decode only those addressed to them. It is also possible to have a switching device, which receives addressed data and directs them to the appropriate recipient device, rather like a central post office.
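The bus discipline just described — every device sees every signal, but decodes only its own — can be sketched as follows (a hypothetical illustration in Python; the class and device names are invented, not from the text):

```python
# Every device attached to the bus sees every signal, but each one
# decodes only the messages carrying its own address.
class Device:
    def __init__(self, address):
        self.address = address
        self.received = []

    def on_bus_signal(self, address, data):
        if address == self.address:   # decode only what is addressed to us
            self.received.append(data)

class Bus:
    def __init__(self):
        self.devices = []

    def attach(self, device):
        self.devices.append(device)

    def send(self, address, data):
        for d in self.devices:        # the bus delivers the signal to all
            d.on_bus_signal(address, data)

bus = Bus()
printer, disk = Device(2), Device(7)
bus.attach(printer)
bus.attach(disk)
bus.send(7, "write block")
print(printer.received, disk.received)  # [] ['write block']
```

A switching device, by contrast, would deliver each message only to its addressee, at the cost of extra central hardware.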

The decision on what communication arrangements to adopt is made on the basis of considerations of cost, capacity, and speed. It is often the case that the several devices forming a computer or a computer network have their data coded in different ways. It is then the job of the CPU(s) and PPs to share the work of interpreting signals into appropriate codes for each machine.

This is broadly termed the problem of interfacing devices. Sometimes, the solution is to have a standard code or structure for the communications device (one meets the S-100 bus, the RS-232 serial port, the ASCII character-code, and so on). Another interfacing problem arises from the difference in the rate at which different devices can send and receive information; this is measured by the baud rate (named after Baudot, the inventor of the first five-hole paper-tape code; one baud is one bit transferred per second; hence kilobaud, kb, and megabaud, Mb); typical rates range from a few hundred baud to tens of kilobaud.

One solution is to send each piece of information (usually one character at a time, which takes 8 to 10 bits) only when the last has been acknowledged (this is referred to as a handshake); this is sure, but slow.
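The cost of character-at-a-time transfer is easy to put in numbers. A rough sketch (the 300-baud line speed is an assumed figure for illustration):

```python
# One baud is one bit per second; a character costs 8 to 10 bits
# (data bits plus start/stop framing).
baud = 300                   # assumed line speed for illustration
bits_per_char = 10

chars_per_second = baud / bits_per_char
print(chars_per_second)      # 30.0 characters per second, at best

# A handshake adds a round-trip wait after every character, so the
# effective rate is lower still; buffering a batch and acknowledging
# once per batch amortizes that wait.
seconds_for_1000_chars = 1000 / chars_per_second
print(seconds_for_1000_chars)  # about 33 seconds for a thousand characters
```

This is why the buffered-batch scheme described next is attractive for any device faster than the line itself.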

Another way is to use a storage buffer in which a large batch of information is accumulated for fast transmission, thus not wasting the time of the faster device. One last kind of choice must be mentioned: some channels of communication are serial (they transmit one bit at a time), while others are parallel (they can transmit a byte or a whole word at a time); the latter are obviously faster, but more complex and more expensive.

When devices are connected by cable, the degree of parallel communication is exhibited in the width of a flat ribbon cable, carrying several wires, side by side, and in the number of pins in the plugs and sockets by which they are connected to the machines. Parallel transmission is a variation on multiplexing.

What we have described is generally referred to as the hardware of a computer. Inevitably, there came to be programs that were hard-wired (in the now outdated phrase) into the computer, in the form of ROM. These are termed firmware.

A computer without software is a helpless set of circuits, and the expertise required to create the basic software that will bring the machine to useful life is comparable to that required to design the machine itself.

Indeed, these days, computers are designed in cooperation between computer architects, who design what the computer will do, hardware engineers, who design how it will be constructed to be able to do it, and software engineers, who design and program the operating system that will run the machine. Beyond this, the computer will also need application software of many kinds, to enable it to do a variety of jobs, such as file-handling, accounting, statistics, payrolls, inventories, complex graphic displays, games, and so on.

Typically, the application software is written independently of the basic system software.

Computer Languages

The CPU of any computer is designed to accept and execute a specific set of operation codes (op-codes), ranging in number from a dozen or so to several hundred.

Different makes and models of computers and microprocessors may have entirely dissimilar op-codes; but the operations that they represent are much more alike than different, both through functional necessity and historical development.

The interpretation of the op-codes is built into the hardware of the OCU (though sometimes the details of interpretation may be modified by the user, through what is called microcoding), and it is part of this interpretation that the complete instruction being decoded contains a certain amount of further information, such as parameters, indices, and one or more memory addresses.
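The OCU's job of decoding op-codes and dispatching operations can be sketched as a fetch-decode-execute loop. The op-codes below are invented for illustration and do not reproduce any real machine's instruction set:

```python
# A toy fetch-decode-execute loop.  Each instruction is an
# (op-code, operand) pair; the loop plays the role of the OCU,
# decoding the op-code and activating the proper operation.
LOAD, ADD, JUMP, HALT = range(4)

def run(program):
    acc, pc = 0, 0                # accumulator and program counter
    while True:
        op, arg = program[pc]     # fetch the instruction at address pc
        pc += 1                   # by default, proceed sequentially
        if op == LOAD:
            acc = arg
        elif op == ADD:
            acc += arg
        elif op == JUMP:
            pc = arg              # nonsequential jump: change the
                                  # address of the next instruction
        elif op == HALT:
            return acc

print(run([(LOAD, 5), (ADD, 7), (HALT, 0)]))  # 12
```

Note how JUMP works exactly as the text describes: by overwriting the address of the next instruction to be executed.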

The aggregate of possible machine instructions is called the machine language. Indeed, the slightest error in a program almost always leads to an error in its output (usually a fatal error!).


It is estimated that, in the production of a working program, the debugging time may be two to four times as long as the time it takes to plan and write the program initially. To give the reader a feeling for the nature of machine language, we present a simplified, fictitious, but typical, machine-language specification. Our computer has two 16-bit accumulator registers (acc), X and Y, which may be coupled into a single 32-bit acc XY, with X holding the more and Y the less significant digits; these are attached to the ALU; and a program-control register (pc) Z, also of 16 bits, attached to the OCU, which contains the address of the next instruction to be executed.

Finally, a bit code b refers to each of the 16 bits in a word (bit 0 being the least significant, rightmost, and bit 15 the most significant, leftmost). In some cases, the codes a, d, p, and b are interpreted somewhat differently, depending on the particular op-code c. As an example of a very simple computer program, we consider the solution to the following problem. Our computer is to be fed a sequence of one thousand 16-bit numbers at input port 9 (these may be keyed in by hand or fed in by a digitizer connected to some experiment).

They are to be stored in one thousand consecutive memory words. Their sum is to be computed, stored in a further memory word, and output to a printer through output port 2. The program is to be stored beginning at address 0 in the memory. Again, what is important about this example is not its detailed form, but the difficulty of its interpretation, and therefore also the difficulty of verification and debugging.
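What the machine-language program above must accomplish can be paraphrased in a few lines of Python (a sketch only: the function name and the base address 0 are illustrative choices; the port numbers and the count of one thousand follow the problem statement):

```python
# Read `count` numbers from an input port, store them in consecutive
# memory words, accumulate their sum, store the sum after the data,
# and send it to an output port.
def summing_program(input_port_9, output_port_2, memory, base=0, count=1000):
    total = 0
    for i in range(count):
        n = next(input_port_9)     # read the next number from port 9
        memory[base + i] = n       # store it in the next memory word
        total += n
    memory[base + count] = total   # store the sum after the data
    output_port_2(total)           # print the result through port 2
    return total

memory = {}
printed = []
result = summing_program(iter(range(1, 1001)), printed.append, memory)
print(result)  # 500500 -- the sum of 1 through 1000
```

What takes one short, readable function here must, in machine language, be spelled out instruction by instruction, with explicit addresses, counters, and jumps — which is precisely the interpretive burden the text is pointing at.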

The programmer must deal with a mass of details that are of a purely mechanical nature and have no relevance to the problem being solved.

Higher-level Languages

After some introductory description of an imaginary computer, and especially of its CPU, and the establishment of some essential notation, we can consider languages at a higher level. Presented with a set of arithmetic and other transformations required by potential users of a proposed new computer, the electronic and logical design engineers seek the simplest circuitry that will execute operations sufficient to generate all the required transformations.

Circuits must be simple to be fast, efficient, reliable, and cheap. However, when an instruction has 16 or more bits, most of which should, for the sake of efficiency, have some significant effect, the exact and complete specification of the action induced by it may well be somewhat forbiddingly intricate!

And indeed, simplicity of circuitry does not usually lead to simplicity of use.



TWO: What is Information?

A cost clerk, for instance, receives information from the factory floor, which he then processes into other information and transmits to some other source, perhaps management.