
Model of information transfer through technical channels. Internet information resources. Information transfer channels

Schematically, the process of information transfer is shown in the figure. It is assumed that there is a source and a recipient of information. The message from the source to the recipient is transmitted through a communication channel (information channel).

Fig. 3. Information transfer process

In such a process, information is presented and transmitted in the form of a certain sequence of signals, symbols, or signs. For example, during a direct conversation between people, sound signals are transmitted - speech; when reading text, a person perceives letters - graphic symbols. The transmitted sequence is called a message. From the source to the receiver, the message is transmitted through some material medium (sound - acoustic waves in the atmosphere; image - light electromagnetic waves). If technical means of communication are used in the transmission process, they are called information channels. These include the telephone, radio, and television.

We can say that the human senses play the role of biological information channels: with their help, information from the outside world reaches a person's memory.

Claude Shannon proposed a diagram of the process of transmitting information through technical communication channels, shown in the figure.

Fig. 4. Shannon's model of the information transfer process

The operation of such a scheme can be explained by the process of talking on the phone. The source of information is the speaking person. The encoder is the handset microphone, which converts sound waves (speech) into electrical signals. The communication channel is the telephone network (the wires and the switches of telephone nodes through which the signal passes). The decoding device is the handset (earphone) of the listening person, the receiver of information. Here the incoming electrical signal is converted into sound.

Communication in which the transmission takes place in the form of a continuous electrical signal is called analog communication.

Coding is any transformation of information coming from a source into a form suitable for its transmission over a communication channel.

Currently, digital communication is widely used, when the transmitted information is encoded in binary form (0 and 1 are binary digits), and then decoded into text, image, sound. Digital communication is discrete.

The term "noise" refers to various kinds of interference that distort the transmitted signal and lead to loss of information. Such interferences, first of all, arise for technical reasons: poor quality of communication lines, insecurity from each other of various flows of information transmitted over the same channels. In such cases noise protection is necessary.

First of all, technical methods are used to protect communication channels from the effects of noise: for example, using shielded cable instead of bare wire, or using various kinds of filters that separate the useful signal from noise.

Claude Shannon developed a special coding theory that provides methods for dealing with noise. One of the important ideas of this theory is that the code transmitted over the communication line must be redundant. Due to this, the loss of some part of the information during transmission can be compensated.

However, the redundancy should not be made too large: this leads to delays and higher communication costs. Shannon's coding theory makes it possible to obtain a code that is optimal, in which the redundancy of the transmitted information is the minimum possible, while the reliability of the received information is the maximum.

In modern digital communication systems, the following technique is often used to combat the loss of information during transmission. The whole message is divided into portions - blocks. For each block, a checksum (the sum of binary digits) is calculated, which is transmitted along with this block. At the place of reception, the checksum of the received block is recalculated, and if it does not match the original, then the transmission of this block is repeated. This will continue until the initial and final checksums match.
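
The following Python sketch models this block-and-checksum scheme. The block size, the error rate of the model channel and the bit-sum checksum are illustrative assumptions, not a description of any particular real protocol.

```python
import random

BLOCK_SIZE = 8  # bits per block; an illustrative choice

def checksum(bits):
    # The checksum here is the sum of the binary digits, as in the text.
    # Real systems use stronger checksums (e.g., CRC), since a plain sum
    # misses some error patterns.
    return sum(bits)

def noisy_channel(bits, error_rate=0.05):
    # Model noise by flipping each bit with a small probability.
    return [b ^ 1 if random.random() < error_rate else b for b in bits]

def send_block(block):
    # Retransmit until the recomputed checksum matches the transmitted one.
    expected = checksum(block)
    while True:
        received = noisy_channel(block)
        if checksum(received) == expected:
            return received

message = [random.randint(0, 1) for _ in range(4 * BLOCK_SIZE)]
received = []
for i in range(0, len(message), BLOCK_SIZE):
    received.extend(send_block(message[i:i + BLOCK_SIZE]))
```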

Information transfer rate is the information volume of the message transmitted per unit of time. Information flow rate units: bit/s, byte/s, etc.

Technical information communication lines (telephone lines, radio communication, fiber-optic cable) have a data-rate limit called the bandwidth of the information channel. These rate limits are physical in nature.

Noise protection


The operation of this scheme can be explained by the example of telephone communication. The source of information in this system is the speaking person; the receiver, accordingly, is the listener. The encoder is the handset microphone, which converts sound signals into electrical signals. The communication channel is the telephone network. The decoding device is again a handset.

Signal coding in information transmission is any transformation of information coming from a source into a form suitable for its transmission over a communication channel. Digital communication, which is discrete by definition, is currently the most widely used. There is also analog communication, in which information is transmitted as a continuous signal (older telephone network standards).

Under " Noise" various kinds of interference distorting the transmitted signal or leading to its loss are implied. Such interference most often occurs due to technical reasons: poor quality of communication lines, insecurity from each other of various information flows transmitted over the same communication channel.

Methods of dealing with "noise":

1. Signal repetition

2. Signal digitization

3. Signal amplification

4. Mechanical means (twisted pair, optical fiber, shielding, etc.)

In addition, coding theory has developed methods for representing transmitted information in order to reduce its loss under the influence of noise.
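
As a minimal illustration of method 1 (signal repetition) from the list above, the sketch below transmits each bit three times and lets the receiver take a majority vote. The repetition factor of three is an illustrative choice, not a prescribed value.

```python
def encode_repetition(bits, n=3):
    # Send every bit n times.
    return [b for b in bits for _ in range(n)]

def decode_repetition(received, n=3):
    # Take a majority vote within each group of n received bits.
    return [1 if sum(received[i:i + n]) > n // 2 else 0
            for i in range(0, len(received), n)]

sent = encode_repetition([1, 0, 1])   # [1, 1, 1, 0, 0, 0, 1, 1, 1]
corrupted = sent[:]
corrupted[1] ^= 1                     # one flipped bit per group is corrected
assert decode_repetition(corrupted) == [1, 0, 1]
```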

5.2. Computer networks

A computer network is a connection of two or more computers for shared access to common resources. There are three types of resources: hardware, software and information.

Hardware resources are the technical facilities made available for general access: a printer, a high-capacity hard disk (file server), a host machine, etc.

In general, a computer network can be represented as a set of nodes interconnected by signal propagation media (transmission media, backbones, communication lines). Computer network nodes host communication network elements and computer systems.

Communication networks. The main elements of traditional communication networks are terminal devices (terminals), transmission and switching systems.

Terminals are designed to connect sources and receivers of information to the communication network. For example, computers can be connected to them via a dedicated two-wire line or via a modem.

The transmission system provides transport of information over a distance. Modern transmission systems support multi-channel signaling over a single backbone.

The switching system is designed to interconnect many spatially separated sources and receivers of information. Thanks to interconnected switching systems, a composite (end-to-end) communication channel is formed for the participants.

Each public network has its own protocols, providing access to certain types of services.

Protocols. A protocol is a set of agreements that guide components in their interaction. In our case, a protocol is a standard set of rules that define the presentation (in particular, the formats) of data and the exchange procedures.

The process of information transfer can be represented by means of a model in the form of a diagram shown in Figure 3.

Fig. 3. Generalized model of an information transmission system

Consider the main elements that make up this model, as well as the transformation of information that occurs in it.

1. Source of information or message (AI) is a material object or subject capable of accumulating, storing, transforming and issuing information in the form of messages or signals of various physical natures. It can be a computer keyboard, a person, an analog output of a video camera, etc.

We will consider two types of information sources: if in a finite time interval the information source creates a finite set of messages, it is discrete; otherwise it is continuous. We will discuss sources in more detail in the next lesson.

Information in the form of the original message from the output of the information source is fed to the input of the encoder, including the source encoder (CI) and the channel encoder (CC).

2. Encoder.

2.1. Source encoder. The source encoder transforms the message into the primary signal, a set of elementary symbols.

Note that the code is a universal way of displaying information during its storage, transmission and processing in the form of a system of one-to-one correspondences between message elements and signals, with the help of which these elements can be fixed. Encoding can always be reduced to the unambiguous transformation of characters from one alphabet into characters from another. At the same time, the code is a rule, a law, an algorithm according to which this transformation is carried out.

The code is the complete set of all possible combinations of symbols of the secondary alphabet built according to this law. The combinations of symbols belonging to a given code are called code words. In each particular case, all or only some of the code words belonging to a given code may be used. Moreover, there are "powerful" codes whose combinations are practically impossible to enumerate in full. Therefore, by the word "code" we mean, first of all, the law by which the transformation is carried out; as a result we obtain code words whose full set belongs to this code, and not to some other one constructed according to a different law.

Symbols of the secondary alphabet, regardless of the basis of the code, are only message carriers. In this case, the message is the letter of the primary alphabet, regardless of the specific physical or semantic content that it reflects.

Thus, the purpose of the source encoder is to present the information in the most compact form. This is necessary in order to efficiently use the resources of the communication channel or storage device. The issues of source coding will be discussed in more detail in topic No. 3.
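
As a toy illustration of source coding, the sketch below maps the letters of a four-letter primary alphabet to binary code words of a secondary alphabet. The code table is a made-up example of a prefix-free code chosen for this illustration, not an optimal code for any real source.

```python
# Code table: a made-up prefix-free code over a four-letter primary alphabet.
CODE = {"a": "0", "b": "10", "c": "110", "d": "111"}
DECODE = {v: k for k, v in CODE.items()}

def encode(message):
    # Each letter of the primary alphabet maps to a binary code word.
    return "".join(CODE[ch] for ch in message)

def decode(bits):
    # Because the code is prefix-free, the code words can be read off
    # unambiguously from left to right.
    out, word = [], ""
    for bit in bits:
        word += bit
        if word in DECODE:
            out.append(DECODE[word])
            word = ""
    return "".join(out)

assert decode(encode("abacad")) == "abacad"
print(encode("abacad"))  # 01001100111: 11 bits vs. 12 for a fixed 2-bit code
```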

2.2. Channel encoder. When transmitting information over a noisy communication channel, errors may occur in the received data. If such errors are small in magnitude or occur rarely enough, the information can be used by the consumer. With a large number of errors, the received information cannot be used.

Channel encoding, or error-correcting coding, is a method of processing transmitted data that reduces the number of errors arising during transmission over a noisy channel.

As a result, a sequence of code symbols, called the code sequence, is formed at the output of the channel encoder. The issues of channel coding will be considered in more detail in topic No. 5, as well as in the course "Theory of electrical communication".

It should be noted that neither error-correcting coding nor data compression is a mandatory operation in the transmission of information; these procedures (and the corresponding blocks in the block diagram) may be absent. However, this can lead to very significant losses in the noise immunity of the system, a significant decrease in the transmission rate, and a decrease in the quality of information transmission. Therefore, practically all modern systems (with the possible exception of the simplest ones) include both efficient (source) and error-correcting (channel) coding.

3. Modulator. If messages are to be transmitted, the symbols of the secondary alphabet are assigned specific physical qualitative features. The process of acting on an encoded message in order to turn it into a signal is called modulation. The functions of the modulator are to match the source messages, or the code sequences generated by the encoder, with the properties of the communication line, and to enable the simultaneous transmission of a large number of messages over a common communication channel.

Therefore, the modulator must convert the source messages, or their corresponding code sequences, into signals (superimpose the messages on signals) whose properties would make efficient transmission over existing communication channels possible. In this case, the signals belonging to the many information transmission systems operating, for example, in a common radio channel must be such that independent transmission of messages from all sources to all recipients of information is ensured. Various modulation methods are studied in detail in the course "Theory of electrical communication".
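
As a simple illustration of the idea, here is a sketch of amplitude modulation in Python (using numpy). All frequencies, the duration and the modulation depth are assumed values chosen for the example.

```python
import numpy as np

fs = 100_000                        # sampling rate, Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)      # 10 ms of signal

f_msg, f_carrier, depth = 1_000, 20_000, 0.5   # assumed parameters
message = np.sin(2 * np.pi * f_msg * t)        # low-frequency message
carrier = np.cos(2 * np.pi * f_carrier * t)    # high-frequency carrier

# Classic amplitude modulation: the message controls the carrier amplitude.
am_signal = (1 + depth * message) * carrier
```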

It can be said that the purpose of the encoder and modulator is to match the information source with the communication line.

4. Communication line is the medium in which the signals carrying information propagate. Do not confuse a communication channel with a communication line: a communication channel is the set of technical means designed to transmit information from a source to a recipient.

Depending on the propagation medium, there are radio, wired, fiber-optic, acoustic and other channels. There are many models that describe communication channels with a greater or lesser degree of detail; in the general case, however, a signal passing through a communication channel is attenuated, acquires some time delay (or phase shift) and becomes noisy.

To increase the throughput of communication lines, messages from several sources can be transmitted over them simultaneously. This approach is called multiplexing. In this case, the messages of each source are transmitted over their own communication channel, although they share a common communication line.

Mathematical models of communication channels will be considered in the course "Theory of electrical communication". The informational characteristics of communication channels will be considered in detail within the framework of our discipline when studying topic No. 4.

5. Demodulator. The received (reproduced) message, due to the presence of interference, generally differs from the sent one. The received message will be called an estimate (meaning an estimate of the message).

To reproduce the estimate of the message, the receiver of the system must first, from the received waveform and taking into account information about the signal form and modulation method used in the transmission, obtain an estimate of the code sequence, called the received sequence. This procedure is called demodulation, detection or signal reception. Demodulation must be performed in such a way that the received sequence differs to the minimum extent from the transmitted code sequence. The issues of optimal reception of signals in radio engineering systems are the subject of study of the TES course.

6. Decoder.

6.1. Channel decoder. The received sequences may generally differ from the transmitted code words, that is, they may contain errors. The number of such errors depends on the level of interference in the communication channel, the transmission rate, the signal selected for transmission and the modulation method, as well as on the method of reception (demodulation). The task of the channel decoder is to detect and, if possible, correct these errors. The procedure of detecting and correcting errors in the received sequence is called channel decoding. The result of decoding is an estimate of the information sequence. The error-correcting code, the coding method and the decoding method should be chosen so that as few uncorrected errors as possible remain at the output of the channel decoder.

The issues of error-correcting coding/decoding in information transmission (and storage) systems are currently given exceptional attention, since this technique can significantly improve the quality of its transmission. In many cases, when the requirements for the reliability of the received information are very high (in computer networks for data transmission, in remote control systems, etc.), transmission without error-correcting coding is generally impossible.

6.2. Source decoder. Since the source information was encoded during transmission in order to have a more compact (or more convenient) representation (data compression, economical coding, source encoding), it is necessary to restore it to its original (or almost original) form from the received sequence. The recovery process is called source decoding and can either be simply the inverse of the encoding operation (lossless encoding/decoding) or restore an approximate value of the original information. The recovery operation also includes, if necessary, the reconstruction of a continuous function from a set of discrete estimate values.

It must be said that recently economical coding has become increasingly prominent in information transmission systems, since, together with error-correcting coding, it has turned out to be the most effective way to increase the speed and quality of transmission.

7. Recipient of information is a material object or subject that perceives information in all forms of its manifestation for the purpose of its further processing and use.

The recipients of information can be both people and technical means that accumulate, store, transform, transmit or receive information.

Information transfer is a term that combines many physical processes of information movement in space. Any of these processes involves such components as the source and receiver of data, the physical carrier of information and the channel (medium) of its transmission.

Information transfer process

The initial containers of data are the various messages transmitted from their sources to receivers. Between them lie the channels for transmitting information. Special technical converter devices (encoders) form physical data carriers, signals, based on the content of messages. The latter are subjected to a number of transformations, including coding, compression and modulation, and are then sent to the communication lines. After passing through them, the signals undergo the inverse transformations, including demodulation, decompression and decoding, as a result of which the original messages are extracted from them and perceived by the receivers.

Information messages

A message is a description of a phenomenon or object, expressed as a set of data that has signs of a beginning and an end. Some messages, such as speech and music, are continuous functions of sound pressure over time. In telegraph communication, a message is the text of a telegram in the form of an alphanumeric sequence. A television message is a sequence of frame messages that the camera lens "sees" and captures at the frame rate. The vast majority of messages transmitted today through information transmission systems are numerical arrays, text, graphics, as well as audio and video files.

Information signals

The transmission of information is possible if it has a physical carrier whose characteristics change depending on the content of the transmitted message, in such a way that the carrier passes through the transmission channel with minimal distortion and can be recognized by the receiver. These changes in the physical carrier form an information signal.

Today, information is transmitted and processed using electrical signals in wired and radio communication channels, as well as optical signals in fiber-optic communication lines (FOCL).

Analog and digital signals

A well-known example of an analog signal, i.e. one that changes continuously in time, is the voltage taken from a microphone, which carries a speech or music message. It can be amplified and wired to the sound systems of a concert hall, which will carry the speech and music from the stage to the audience in the gallery.

If, in accordance with the magnitude of the voltage at the output of the microphone, the amplitude or frequency of high-frequency electrical oscillations in the radio transmitter is continuously changed in time, then an analog radio signal can be transmitted on the air. The TV transmitter in the analog television system generates an analog signal in the form of a voltage proportional to the current brightness of the image elements perceived by the camera lens.

However, if the analog voltage from the microphone output is passed through an analog-to-digital converter (ADC), then its output will no longer be a continuous function of time, but a sequence of samples of this voltage taken at regular intervals at the sampling frequency. In addition, the ADC quantizes the voltage by level, replacing the entire possible range of its values with a finite set of values determined by the number of binary digits of its output code. A continuous physical quantity (in this case, voltage) thus turns into a sequence of digital codes (is digitized), and can then be stored, processed and transmitted in digital form through information transmission networks. This significantly increases the speed and noise immunity of these processes.
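
A minimal sketch of what such a converter does, under assumed parameters: sample a 440 Hz "microphone" voltage at a fixed rate and quantize each sample to one of 2^8 levels. The tone, rate and bit depth are illustrative choices.

```python
import numpy as np

BITS = 8                            # output code width: 2**8 = 256 levels
fs = 8_000                          # sampling frequency, Hz (assumed)
t = np.arange(0, 0.005, 1 / fs)     # sampling instants at regular intervals

# "Analog" input: a 440 Hz tone from a microphone, scaled to [-1, 1].
voltage = np.sin(2 * np.pi * 440 * t)

# Quantization: replace the continuous range [-1, 1] with 256 levels,
# i.e. each sample becomes an integer code 0..255.
levels = 2 ** BITS
codes = np.round((voltage + 1) / 2 * (levels - 1)).astype(int)
```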

Information transfer channels

Usually, this term refers to the complexes of technical means involved in the transmission of data from the source to the receiver, as well as the environment between them. The structure of such a channel, using typical means of information transmission, is represented by the following sequence of transformations:

AI - PS - (KI) - KK - M - LPI - DM - DC - (DI) - PI

AI is a source of information: a person or another living being, a book, a document, an image on a non-electronic medium (canvas, paper), etc.

PS is a converter of information message into information signal, which performs the first stage of data transmission. Microphones, television and video cameras, scanners, fax machines, PC keyboards, etc. can act as PS.

KI is an information encoder, which compresses the information signal to reduce the volume of information, in order to increase its transmission rate or reduce the frequency band required for transmission. This link is optional, as the parentheses show.

KK - channel encoder, used to increase the noise immunity of the information signal.

M is a signal modulator for changing the characteristics of intermediate carrier signals depending on the value of the information signal. A typical example is the amplitude modulation of a high-frequency carrier signal depending on the value of a low-frequency information signal.

LPI - an information transmission line representing a combination of the physical environment (for example, an electromagnetic field) and technical means for changing its state in order to transmit a carrier signal to the receiver.

DM is a demodulator for separating the information signal from the carrier signal. Present only in the presence of M.

DC - channel decoder for detecting and/or correcting errors in the information signal that occurred on the LPI. Present only in the presence of KK.

DI - information decoder. Present only in the presence of KI.

PI - information receiver (computer, printer, display, etc.).

If the transmission of information is two-way (duplex channel), then on both sides of the LPI there are modem units (MODulator-DEModulator) that combine M and DM links, as well as codec units (COder-DEcoder) that combine encoders (KI and KK) and decoders (DI and DC).

Characteristics of transmission channels

The main distinguishing features of the channels include bandwidth and noise immunity.

In the channel, the information signal is exposed to noise and interference. These can be caused by natural factors (for example, atmospheric ones for radio channels) or be deliberately created by an adversary.

The noise immunity of transmission channels is increased by using various kinds of analog and digital filters to separate information signals from noise, as well as special message transmission methods that minimize the effect of noise. One of these methods is the addition of extra characters that do not carry useful content, but help to control the correctness of the message, as well as correct errors in it.

The bandwidth of a channel is the maximum number of binary symbols that it can transmit per second in the absence of interference. For different channels it varies from a few kbit/s to hundreds of Mbit/s and is determined by their physical properties.

Information transfer theory

Claude Shannon is the author of a special theory of coding transmitted data, which provides methods for combating noise. One of the main ideas of this theory is the need for redundancy in the digital code transmitted over information transmission lines. This makes it possible to restore what was lost if some part of the code is corrupted during transmission. Such codes (digital information signals) are called noise-immune. However, code redundancy should not be taken too far: it delays the transmission of information and raises the cost of communication systems.

Digital signal processing

Another important component of the theory of information transmission is a system of methods for digital signal processing in transmission channels. These methods include algorithms for digitizing the initial analog information signals with a sampling rate determined on the basis of Shannon's sampling theorem, as well as methods for generating noise-protected carrier signals on their basis for transmission over communication lines, and for digital filtering of the received signals in order to separate them from interference.

The first technical means of transmitting information over a distance was the telegraph, invented in 1837 by the American Samuel Morse. In 1876, the American A. Bell invented the telephone. Based on the discovery of electromagnetic waves by the German physicist Heinrich Hertz (1886), radio was invented by A.S. Popov in Russia in 1895 and, almost simultaneously, by G. Marconi in Italy in 1896. Television and the Internet appeared in the twentieth century.

All of the listed technical means of information communication are based on the transmission of a physical (electrical or electromagnetic) signal over a distance and are subject to certain general laws. These laws are studied by communication theory, which emerged in the 1920s. The mathematical apparatus of communication theory, the mathematical theory of communication, was developed by the American scientist Claude Shannon.

Claude Elwood Shannon (1916–2001), USA

Claude Shannon proposed a model for the process of transmitting information through technical communication channels, represented by a diagram.

Technical information transmission system

Encoding here means any transformation of information coming from a source into a form suitable for its transmission over a communication channel. Decoding is the inverse transformation of the signal sequence.

The operation of such a scheme can be explained by the familiar process of talking on the phone. The source of information is the speaking person. An encoder is a handset microphone that converts sound waves (speech) into electrical signals. The communication channel is the telephone network (wires, switches of telephone nodes through which the signal passes). The decoding device is a handset (headphone) of the listening person - the receiver of information. Here the incoming electrical signal is converted into sound.

Modern computer systems for transmitting information - computer networks - work on the same principle. There is an encoding process that converts a binary computer code into a physical signal of the type that is transmitted over a communication channel. Decoding is the reverse transformation of the transmitted signal into computer code. For example, when using telephone lines in computer networks, the functions of encoding and decoding are performed by a device called a modem.



Channel capacity and information transfer rate

Developers of technical information transmission systems have to solve two interrelated tasks: how to ensure the highest speed of information transfer and how to reduce information loss during transmission. Claude Shannon was the first scientist who took on the solution of these problems and created a new science for that time - information theory.

K. Shannon determined a method of measuring the amount of information transmitted over communication channels. He introduced the concept of channel bandwidth as the maximum possible information transfer rate. This rate is measured in bits per second (as well as kilobits per second and megabits per second).

The throughput of a communication channel depends on its technical implementation. For example, computer networks use the following means of communication:

telephone lines,

electrical cable connections,

fiber-optic cables,

radio links.

The throughput of telephone lines is tens to hundreds of kbit/s; the throughput of fiber-optic lines and radio communication lines is measured in tens and hundreds of Mbit/s.
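
A short worked example shows how throughput translates into transfer time. The file size and the two rates below are assumed round values chosen for illustration.

```python
FILE_SIZE_BYTES = 10 * 1024 * 1024   # a 10 MB file, an assumed example

def transfer_time(size_bytes, rate_bit_per_s):
    # Transfer time = information volume (in bits) / channel throughput.
    return size_bytes * 8 / rate_bit_per_s

for name, rate in [("telephone line, 56 kbit/s", 56_000),
                   ("fiber optic, 100 Mbit/s", 100_000_000)]:
    print(f"{name}: {transfer_time(FILE_SIZE_BYTES, rate):.1f} s")
# telephone line: ~1498 s (about 25 minutes); fiber: ~0.8 s
```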

Noise, noise protection

The term "noise" refers to various kinds of interference that distort the transmitted signal and lead to loss of information. Such interference primarily occurs due to technical reasons: poor quality of communication lines, insecurity from each other of various information flows transmitted over the same channels. Sometimes, while talking on the phone, we hear noise, crackling, which make it difficult to understand the interlocutor, or the conversation of completely different people is superimposed on our conversation.

The presence of noise leads to the loss of transmitted information. In such cases noise protection is necessary.

First of all, technical methods are used to protect communication channels from the effects of noise. For example, using shielded cable instead of bare wire; the use of various kinds of filters that separate the useful signal from noise, etc.

Claude Shannon developed coding theory, which gives methods for dealing with noise. One of the important ideas of this theory is that the code transmitted over the communication line must be redundant. Due to this, the loss of some part of the information during transmission can be compensated. For example, if you are hard to hear when talking on the phone, then by repeating each word twice, you have a better chance that the interlocutor will understand you correctly.

However, the redundancy cannot be made too large. This would lead to delays and higher communication costs. Coding theory allows you to obtain a code that is optimal: the redundancy of the transmitted information is the minimum possible, while the reliability of the received information is the maximum.

In modern digital communication systems, the following technique is often used to combat the loss of information during transmission. The whole message is divided into portions called packets. For each packet a checksum (the sum of its binary digits) is calculated and transmitted along with the packet. At the place of reception, the checksum of the received packet is recalculated and, if it does not match the original sum, the transmission of this packet is repeated. This continues until the initial and final checksums match.

Considering the transfer of information in propaedeutic and basic computer science courses, first of all, this topic should be discussed from the position of a person as a recipient of information. The ability to receive information from the surrounding world is the most important condition for human existence. The human sense organs are the information channels of the human body, carrying out the connection of a person with the external environment. On this basis, information is divided into visual, auditory, olfactory, tactile, and gustatory. The rationale for the fact that taste, smell and touch carry information to a person is as follows: we remember the smells of familiar objects, the taste of familiar food, we recognize familiar objects by touch. And the content of our memory is stored information.

Students should be told that in the animal world the informational role of the senses is different from the human one. The sense of smell performs an important informational function for animals. The heightened sense of smell of service dogs is used by law enforcement agencies to search for criminals, detect drugs, etc. The visual and sound perception of animals differs from that of humans. For example, bats are known to hear ultrasound, and cats are known to see in the dark (from a human perspective).

Within the framework of this topic, students should be able to give specific examples of the process of transmitting information, determine for these examples the source, receiver of information, and the channels used for transmitting information.

When studying computer science in high school, students should be introduced to the basic provisions of the technical theory of communication: the concepts of coding, decoding, information transfer rate, channel capacity, noise, noise protection. These issues can be considered within the framework of the topic “Technical means of computer networks”.

Number representation

Numbers in mathematics

The number is the most important concept of mathematics, one that took shape and developed over a long period of human history. People have been working with numbers since ancient times. Initially, a person operated only with positive integers, which are called natural numbers: 1, 2, 3, 4, ... For a long time there was an opinion that there is a largest number, "more than this the human mind cannot comprehend" (as the Old Slavonic mathematical treatises put it).

The development of mathematical science has led to the conclusion that there is no largest number. From a mathematical point of view, the series of natural numbers is infinite, i.e. is not limited. With the advent of the concept of a negative number in mathematics (R. Descartes, XVII century in Europe; in India much earlier), it turned out that the set of integers is unlimited both “left” and “right”. The mathematical set of integers is discrete and unlimited (infinite).

The concept of a real number was introduced into mathematics by Isaac Newton in the 18th century. From a mathematical point of view, the set of real numbers is infinite and continuous. It includes all the integers and infinitely many non-integer numbers. Between any two points on the number axis lies an infinite set of real numbers. The concept of a real number is associated with the idea of a continuous numerical axis, any point of which corresponds to a real number.

Integer representation

In computer memory, numbers are stored in the binary number system (see "Number systems" 2). There are two forms of representing integers in a computer: unsigned integers and signed integers.

Unsigned integers are the set of non-negative numbers in the range [0, 2^k - 1], where k is the bit depth of the memory cell allocated for the number. For example, if a 16-bit (2-byte) memory cell is allocated for an integer, then the largest number will be:

1111111111111111

In decimal, this corresponds to: 2^16 - 1 = 65 535.

If all digits of the cell are zeros, then the number is zero. Thus, 2^16 = 65 536 different integers can be placed in a 16-bit cell.

Signed integers are the set of positive and negative numbers in the range [-2^(k-1), 2^(k-1) - 1]. For example, when k = 16 the integer representation range is [-32 768, 32 767]. The highest-order bit of the memory cell stores the sign of the number: 0 for a positive number, 1 for a negative one. The largest positive number, 32 767, has the representation 0111111111111111.

For example, the decimal number 255, after being converted to binary and placed into a 16-bit memory cell, has the internal representation 0000000011111111.

Negative integers are represented in two's complement (additional) code. The additional code of a positive number N is the binary representation that, when added to the code of the number N, gives the value 2^k, where k is the number of bits in the memory cell. For example, the additional code for the number 255 is 1111111100000001.

This is the representation of the negative number -255. Let's add the codes of the numbers 255 and -255: 0000000011111111 + 1111111100000001 = 1 0000000000000000.

The one in the highest order "dropped out" of the cell, so the sum turned out to be zero. But this is how it should be: N + (-N) = 0. The computer processor performs subtraction as addition with the additional code of the subtrahend. In this case, overflow of the cell (exceeding the limit values) does not interrupt program execution. The programmer must know about this and take it into account!
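
A minimal sketch of this arithmetic, modelling a 16-bit cell with Python's arbitrary-precision integers and a bit mask; masking with 0xFFFF plays the role of the bit that "drops out" of the cell.

```python
K = 16
MASK = (1 << K) - 1                 # 0xFFFF: keeps only k = 16 bits

def additional_code(n):
    # Internal representation of -n in a k-bit cell (two's complement).
    return (-n) & MASK

code_255 = 255 & MASK
code_minus_255 = additional_code(255)
print(f"{code_minus_255:016b}")     # 1111111100000001

# The 17th bit of the sum "drops out" of the cell, leaving zero:
print((code_255 + code_minus_255) & MASK)   # 0, since N + (-N) = 0
```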

The format for representing real numbers in a computer is called floating-point format. A real number R is represented as the product of a mantissa m and the base n of the number system raised to a power p, which is called the order: R = m × n^p.

The representation of a number in floating point form is ambiguous. For example, for the decimal number 25.324, the following equalities are true:

25.324 = 2.5324 × 10^1 = 0.0025324 × 10^4 = 2532.4 × 10^-2, etc.

To avoid ambiguity, a normalized representation of a number in floating-point form is used in the computer. The mantissa in the normalized representation must satisfy the condition 0.1ₙ ≤ m < 1ₙ (the bounds being written in base n). In other words, the mantissa is less than one and its first significant digit is not zero. In some cases, the normalization condition is taken as 1ₙ ≤ m < 10ₙ.

In computer memory the mantissa is represented as an integer containing only its significant digits (the leading zero and the point are not stored). Therefore, the internal representation of a real number reduces to a pair of integers: the mantissa and the order.

Different types of computers use different ways of representing numbers in floating point form. Consider one of the variants of the internal representation of a real number in a four-byte memory cell.

The cell must contain the following information about the number: the sign of the number, the exponent, and the significant digits of the mantissa.

The sign of the number is stored in the most significant bit of the 1st byte: 0 means plus, 1 means minus. The remaining 7 bits of the first byte contain the machine order. The next three bytes store the significant digits of the mantissa (24 bits).

Seven binary digits can hold binary numbers in the range from 0000000 to 1111111. This means that the machine order ranges from 0 to 127 (in the decimal number system), 128 values in total. The order can obviously be either positive or negative. It is reasonable to divide these 128 values equally between positive and negative order values: from -64 to 63.

The machine order is biased relative to the mathematical order and takes only non-negative values. The bias is chosen so that the minimum mathematical value of the order corresponds to zero.

The relationship between machine order (Mp) and mathematical order (p) in the case under consideration is expressed by the formula: Mp = p + 64.

The resulting formula is written in the decimal system. In binary, the formula looks like: Mp₂ = p₂ + 100 0000₂.

To write the internal representation of a real number, you must:

1) translate the modulus of a given number into a binary number system with 24 significant digits,

2) normalize a binary number,

3) find the machine order in the binary system,

4) taking into account the sign of the number, write out its representation in a four-byte machine word.

Example. Write the internal representation of the number 250.1875 in floating point form.

Solution

1. Let's translate it into a binary number system with 24 significant digits:

250.1875₁₀ = 11111010.0011000000000000₂.

2. Let's write in the form of a normalized binary floating point number:

0.111110100011000000000000 × 10₂^1000.

Here the mantissa, the base of the number system (2₁₀ = 10₂) and the order (8₁₀ = 1000₂) are all written in binary.

3. Calculate the machine order in the binary system:

Mp₂ = 1000 + 100 0000 = 100 1000.

4. Let's write the representation of the number in a four-byte memory cell, taking into account the sign of the number: 0100 1000 1111 1010 0011 0000 0000 0000.

Hexadecimal form: 48FA3000.
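
The sketch below implements the four-byte educational format just described (sign bit, 7-bit machine order biased by 64, 24-bit mantissa), assuming a positive number for brevity. Note that this is the textbook format, not the IEEE 754 standard used by real hardware; it reproduces the result 48FA3000 for 250.1875.

```python
def encode_real(x):
    # Positive numbers only; sign handling is omitted for brevity.
    assert x > 0
    # 1-2. Normalize: find p such that x = m * 2**p with 0.5 <= m < 1
    #      (i.e. 0.1 <= m < 1 when m is written in binary).
    m, p = x, 0
    while m >= 1:
        m, p = m / 2, p + 1
    while m < 0.5:
        m, p = m * 2, p - 1
    # 3. Machine order: Mp = p + 64 (7 bits, stored after the sign bit).
    machine_order = p + 64
    # 4. 24 significant binary digits of the mantissa.
    mantissa = round(m * 2 ** 24)
    return (machine_order << 24) | mantissa   # sign bit is 0

print(f"{encode_real(250.1875):08X}")   # 48FA3000
```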

The range of real numbers is much wider than the range of integers. Positive and negative numbers are arranged symmetrically about zero. Therefore, the maximum and minimum numbers are equal in absolute value.

The smallest absolute number is zero. The largest floating-point number in absolute value is the number with the largest mantissa and the largest exponent.

For a four-byte machine word, this number would be:

0.111111111111111111111111 × 10₂^111111.

After converting to the decimal number system, we get:

MAX = (1 - 2^-24) × 2^63 ≈ 10^19.

If, when calculating with real numbers, the result is outside the allowable range, then the program execution is interrupted. This happens, for example, when dividing by zero, or by a very small number close to zero.

Real numbers whose mantissa bit length exceeds the number of bits allocated for the mantissa in a memory cell are represented in the computer approximately (with a “truncated” mantissa). For example, the rational decimal number 0.1 in a computer will be represented approximately (rounded) because in the binary system its mantissa has an infinite number of digits. The consequence of this approximation is the error of machine calculations with real numbers.

The computer performs calculations with real numbers approximately. The error of such calculations is called machine rounding error.

The set of real numbers that can be exactly represented in computer memory in floating-point form is limited and discrete. Discreteness is a consequence of the limited number of digits of the mantissa, as discussed above.

The number of real numbers that can be exactly represented in computer memory can be calculated by the formula: N = 2^t · (U - L + 1) + 1. Here t is the number of binary digits of the mantissa; U is the maximum value of the mathematical order; L is the minimum value of the order. For the representation option considered above (t = 24, U = 63, L = -64) we get: N = 2 146 683 548.

The topic of representing numerical information in a computer is present both in the standard for elementary school and for high school.

In the basic school (basic course) it is enough to consider the representation of integers in a computer. The study of this issue is possible only after getting acquainted with the topic “Number systems”. In addition, from the principles of computer architecture, students should be aware that a computer works with a binary number system.

When considering the representation of integers, the main attention should be paid to the limited range of integers and to the connection of this range with the bit depth k of the allocated memory cell. For non-negative numbers (unsigned): [0, 2^k - 1]; for positive and negative numbers (signed): [-2^(k-1), 2^(k-1) - 1].

Obtaining the internal representation of numbers should be analyzed with examples. After that, by analogy, students should independently solve such problems.

Example 1. Get the signed internal representation of the integer 1607 in a two-byte memory location.

Solution

1) Convert the number to the binary system: 1607₁₀ = 11001000111₂.

2) Padding with zeros on the left to 16 digits, we get the internal representation of this number in the cell: 0000011001000111.

It is desirable to show how the hexadecimal form is used as a compressed form of this code; it is obtained by replacing each group of four binary digits with one hexadecimal digit: 0647 (see "Number systems" 2).

More difficult is the problem of obtaining the internal representation of a negative integer (-N), i.e. its additional (two's complement) code. You need to show the students the algorithm of this procedure:

1) get the internal representation of a positive number N;

2) get the reverse code of this number by replacing each 0 with 1 and each 1 with 0;

3) add 1 to the resulting number.

Example 2. Get the internal representation of the negative integer -1607 in a two-byte memory location.

Solution

1) The internal representation of the positive number 1607: 0000011001000111.

2) The reverse code: 1111100110111000.

3) Adding one gives the internal representation of -1607: 1111100110111001 (F9B9 in hexadecimal).

It is useful to show students what the internal representation of the smallest negative number looks like. In a two-byte cell, this is -32,768.

1) it is easy to convert the number 32 768 to the binary number system, since 32 768 = 2^15. Therefore, in binary it is: 1000000000000000.

2) write the reverse code: 0111111111111111.

3) add one to this binary number, and we get: 1000000000000000.

The one in the highest bit means the minus sign. Do not think that the resulting code is "minus zero": this is -32 768 in two's complement form. Such are the rules of machine representation of integers.

After showing this example, have the students prove for themselves that adding the number codes 32767 + (-32768) results in the number code -1.
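
A quick sketch students can use to check this exercise: in a 16-bit cell, the sum of the codes of 32 767 and -32 768 is indeed the code of -1.

```python
MASK = 0xFFFF                      # a two-byte (16-bit) cell

code_a = 32767 & MASK              # 0111111111111111
code_b = (-32768) & MASK           # 1000000000000000
result = (code_a + code_b) & MASK

print(f"{result:016b}")            # 1111111111111111
print(result == ((-1) & MASK))     # True: this is exactly the code of -1
```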

According to the standard, the representation of real numbers should be studied in high school. When studying computer science in grades 10-11 at the basic level, it is enough to tell students about the main features of computer arithmetic with real numbers: the limited range, and the interruption of the program when a result goes beyond it; the error of machine calculations with real numbers; and the fact that the computer performs calculations with real numbers more slowly than with integers.

Studying at the profile level requires a detailed analysis of how real numbers are represented in floating-point format and of the features of performing calculations on a computer with real numbers. A very important problem here is the estimation of the calculation error and warning against loss of precision and interruption of the program. Detailed material on these issues is available in the training manual.

Notation

A notation (number system) is a way of representing numbers and the corresponding rules for operating on numbers. The various number systems that existed in the past and are in use today can be divided into non-positional and positional. The signs used to write numbers are called digits.

In non-positional number systems, the value of a digit does not depend on its position in the number.

An example of a non-positional number system is the Roman system (Roman numerals). In the Roman system, Latin letters are used as digits: I (1), V (5), X (10), L (50), C (100), D (500), M (1000).

Example 1. The number CCXXXII consists of two hundreds, three tens and two units, and is equal to two hundred thirty-two.

Roman numerals are written from left to right in descending order; in this case, their values are added. If a smaller numeral is written to the left of a larger one, its value is subtracted.

Example 2.

VI = 5 + 1 = 6; IV = 5 - 1 = 4.

Example 3.

MCMXCVIII = 1000 + (1000 - 100) + (100 - 10) + 5 + 1 + 1 + 1 = 1998.

In positional number systems, the value denoted by a digit in the notation of a number depends on its position. The number of digits used is called the base of the positional number system.

The number system used in modern mathematics is the positional decimal system. Its base is ten, because any number is written using ten digits:

0, 1, 2, 3, 4, 5, 6, 7, 8, 9.

The positional nature of this system is easy to understand by the example of any multi-digit number. For example, in the number 333, the first three means three hundred, the second - three tens, the third - three units.

To write numbers in a positional system with base n, one must have an alphabet of n digits. Usually, for n < 10, the first n Arabic numerals are used; for n > 10, letters are added to the ten Arabic numerals. Here are examples of the alphabets of several systems: binary {0, 1}, octal {0, 1, ..., 7}, hexadecimal {0, 1, ..., 9, A, B, C, D, E, F}.

If it is required to indicate the base of the system to which a number belongs, it is written as a subscript to this number. For instance:

101101₂, 3671₈, 3B8F₁₆.

In a base-q number system (q-ary number system), the units of the digit positions are successive powers of the number q: q units of any position form one unit of the next position. To write numbers in the q-ary number system, q different characters (digits) representing the numbers 0, 1, ..., q - 1 are required. The number q itself is written in the q-ary system as 10.
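
As a closing illustration, a small sketch converts a number to an arbitrary base q by successive division; the outputs reproduce the subscripted numbers shown above. The digit alphabet is the conventional one for bases up to 16.

```python
DIGITS = "0123456789ABCDEF"   # the usual alphabet for bases up to 16

def to_base(n, q):
    # The digits of n in base q are the remainders of successive
    # integer divisions by q, read in reverse order.
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(DIGITS[n % q])
        n //= q
    return "".join(reversed(digits))

print(to_base(45, 2))      # 101101
print(to_base(1977, 8))    # 3671
print(to_base(15247, 16))  # 3B8F
```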


