
# PROBABILITY AND STOCHASTIC PROCESSES YATES PDF

Thursday, October 10, 2019

Yates, R. D., and Goodman, D. J., Probability and Stochastic Processes: A Friendly Introduction for Electrical and Computer Engineers (2nd edition). This text introduces engineering students to probability theory and stochastic processes. Companion materials include a manual describing the .m functions used in the text and a quiz solutions manual.

Simulate the transmission of packets, each containing bits. Count the number of packets decoded correctly. How many devices were found to work? Perform 10 repetitions of the trials. Are your results fairly consistent? Use the ndgrid function.

Recall that a particular operation has six components. Each component has a failure probability q, independent of any other component. For each replacement of a regular component, perform trials. It begins with a physical model of an experiment. The set of all possible observations, S, is the sample space of the experiment. S is the beginning of the mathematical probability model. In addition to S, the mathematical model includes a rule for assigning numbers between 0 and 1 to sets A in S.

In this chapter and for most of the remainder of the course, we will examine probability models that assign numbers to the outcomes in the sample space. When we observe one of these numbers, we refer to the observation as a random variable. In our notation, the name of a random variable is always a capital letter, for example, X.

The set of possible values of X is the range of X. Since we often consider more than one random variable at a time, we denote the range of a random variable by the letter S with a subscript which is the name of the random variable. We use S_X to denote the range of X because the set of all possible values of X is analogous to S, the set of all possible outcomes of an experiment.

A probability model always begins with an experiment. Each random variable is related directly to this experiment. There are three types of relationships. The random variable is the observation. Example 2. Each observation is a random variable X. In this case, S_X, the range of X, and the sample space S are identical. The random variable is a function of the observation. The sample space S consists of the 64 possible sequences. A random variable related to this experiment is N, the number of accepted circuits.

The random variable is a function of another random variable. The remainder of this chapter will develop methods to characterize probability models for random variables. We observe that in the preceding examples, the value of a random variable can always be derived from the outcome of the underlying experiment. This is not a coincidence. As we saw in Example 2. In these cases, the experiments do not produce random variables.

We refer to experiments with parameter settings that do not produce random variables as improper experiments. The observation is T seconds, the time elapsed until the rocket returns to Earth.

Under what conditions is the experiment improper? At low velocities, V, the rocket will return to Earth at a random time of T seconds. On occasion, it is important to identify the random variable X by the function X(s) that maps the sample outcome s to the corresponding value of the random variable X. Random variables A and C are discrete random variables. The possible values of these random variables form a countable set. The underlying experiments have sample spaces that are discrete.

The random variable M can be any nonnegative real number. It is a continuous random variable. Its experiment has a continuous sample space. In this chapter, we study the properties of discrete random variables.


Chapter 3 covers continuous random variables. Often, but not always, a discrete random variable takes on integer values. An exception is the random variable related to your probability grade. The experiment is to take this course and observe your grade. Have you thought about why we transform letter grades to numerical values? We believe the principal reason is that it allows us to compute averages.

In general, this is also the main reason for introducing the concept of a random variable. In the mathematics of probability, averages are called expectations or expected values of random variables.

We introduce expected values formally in Section 2. In each course, the student will earn a B with probability 0. The argument of a PMF ranges over all real numbers. On the other hand, P_X(x) is a function ranging over all real numbers x. Observe our notation for a random variable and its PMF. We usually use the corresponding lowercase letter x to denote a possible value of the random variable.

The notation for the PMF is the letter P with a subscript indicating the name of the random variable. In these examples, r and x are just dummy variables. We graph a PMF by marking on the horizontal axis each value with nonzero probability and drawing a vertical bar with length proportional to the probability. From Example 2. Do not omit this line in your expressions of PMFs. Each shot that was good was worth 1 point. What is the PMF of X, the number of points that he scored? There are four outcomes of this experiment. The random variable X has three possible values corresponding to three events. The following theorem applies the three axioms of probability to discrete random variables.
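The shot example above elides its specifics, but the counting it describes is mechanical: enumerate the outcomes, compute each outcome's probability, and add probabilities for outcomes that map to the same value of X. As a sketch (assuming two independent shots, each good with probability p; both the shot count and p are assumptions, not from the text, and the book's own code is Matlab rather than Python):

```python
from itertools import product

def two_shot_pmf(p):
    """PMF of X, the total points from two independent shots,
    each worth 1 point and good with probability p."""
    pmf = {}
    for outcome in product((0, 1), repeat=2):   # 0 = miss, 1 = good
        prob = 1.0
        for shot in outcome:
            prob *= p if shot == 1 else (1.0 - p)
        x = sum(outcome)                        # points scored
        pmf[x] = pmf.get(x, 0.0) + prob
    return pmf
```

The four outcomes collapse onto the three values 0, 1, 2, and the probabilities always sum to 1, as the theorem that follows requires.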

Theorem 2. Proof All three properties are consequences of the axioms of probability in Section 1. We have also introduced the probability mass function.

In practical applications, certain families of random variables appear over and over again in many experiments. In each family, the probability mass functions of all the random variables have the same mathematical form.

They differ only in the values of one or two parameters. There is one formula for the PMF of all the random variables in a family.

Depending on the family, the PMF formula contains one or two parameters. Our nomenclature for a family consists of the family name followed by one or two parameters in parentheses. For example, binomial (n, p) refers in general to the family of binomial random variables. Binomial 7, 0. Appendix A summarizes important properties of 17 families of random variables.
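A family PMF really is one formula in its parameters: fixing the parameters picks out one member of the family. A minimal sketch of the binomial (n, p) family (in Python rather than the book's Matlab):

```python
from math import comb

def binomial_pmf(n, p):
    """PMF of a binomial (n, p) random variable, as a dict {k: P[X = k]}
    for k = 0, 1, ..., n."""
    return {k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}
```

For instance, `binomial_pmf(7, 0.1)` instantiates one member of the family (the value 0.1 here is illustrative; the text truncates the second parameter).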

Observe whether the side facing up is heads or tails.


Let X be the number of heads observed. Let X be the value of the bit 0 or 1. The PMF in Example 2. In the following examples, we use an integrated circuit test procedure to represent any experiment with two possible outcomes.

In this particular experiment, the outcome r, that a circuit is a reject, occurs with probability p. Some simple experiments that involve tests of integrated circuits will lead us to the Bernoulli, binomial, geometric, and Pascal random variables.

These six families of random variables occur often in practical applications. With probability p, the circuit is rejected. Let X be the number of rejected circuits in one test. What is the PMF of Y? The procedure is to keep testing circuits until a reject appears. Let K equal the number of rejects in the n tests. Adopting the vocabulary of Section 1. Whenever we have a sequence of n independent trials each with success probability p, the number of successes is a binomial random variable.

In general, for a binomial (n, p) random variable, we call n the number of trials and p the success probability. Let L equal the number of tests. What is the PMF of L? A is the event of success on attempt l. For a sequence of n independent trials with success probability p, a Pascal random variable is the number of trials up to and including the kth success.
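The geometric and Pascal definitions above translate directly into formulas: the first success on trial l requires l - 1 failures followed by a success, and the kth success on trial n requires k - 1 successes in the first n - 1 trials followed by a success. A sketch of both (Python stand-ins, not the book's Matlab functions):

```python
from math import comb

def geometric_pmf(p, l):
    """P[L = l]: first success occurs on trial l (l = 1, 2, ...)."""
    return (1 - p) ** (l - 1) * p

def pascal_pmf(k, p, n):
    """P[N = n]: kth success occurs on trial n (n = k, k+1, ...)."""
    if n < k:
        return 0.0
    return comb(n - 1, k - 1) * p**k * (1 - p) ** (n - k)
```

Note the consistency check built into the definitions: a Pascal random variable with k = 1 is exactly a geometric random variable.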

The random variable N is the number of spots that appears on the side facing up. Therefore, N is a discrete uniform (1, 6) random variable. While the time of each occurrence is completely random, there is a known average number of occurrences per unit time.

We will return to Poisson random variables many times in this text. At this point, we consider only the basic properties. To describe a Poisson random variable, we will call the occurrence of the phenomenon of interest an arrival. What is the probability that there are no hits in an interval of 0. What is the probability that there are no more than two hits in an interval of one second? In an interval of 0.

What is the probability that there will be no queries processed in a second interval? What is the probability that at least two queries will be processed in a 2-second interval? It makes an error with probability p, independent of whether any other bit is received correctly.
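The Poisson questions above (no arrivals in an interval, at least two arrivals in an interval) all reduce to evaluating the Poisson PMF and partial sums of it. A minimal sketch, where alpha is the expected number of arrivals in the interval (Python stand-ins for the book's Matlab `poissonpmf`):

```python
from math import exp, factorial

def poisson_pmf(alpha, k):
    """P[K = k] for a Poisson random variable with expected value alpha."""
    return alpha**k * exp(-alpha) / factorial(k)

def prob_at_most(alpha, kmax):
    """P[K <= kmax], by summing the PMF."""
    return sum(poisson_pmf(alpha, k) for k in range(kmax + 1))
```

In particular, the probability of no arrivals is `poisson_pmf(alpha, 0)`, which is exp(-alpha), and the probability of at least two arrivals is `1 - prob_at_most(alpha, 1)`.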

The two functions are closely related. Each can be obtained easily from the other. For any real number x, the CDF is the probability that the random variable X is no larger than x. All random variables have cumulative distribution functions but only discrete random variables have probability mass functions.

The notation convention for the CDF follows that of the PMF, except that we use the letter F with a subscript corresponding to the name of the random variable. The height of the jump at x_i is P_X(x_i). To sketch a CDF of a discrete random variable, we draw a graph with the vertical value beginning at zero at the left end of the horizontal axis (negative numbers with large magnitude).

The graph jumps by an amount P_X(x_i) at each x_i with nonzero probability. We draw the graph of the CDF as a staircase with jumps at each x_i with nonzero probability. The CDF is the upper value of every jump in the staircase. Math texts call this the right-hand limit of F_R(r). Familiarity with the geometric series is essential for calculating probabilities involving geometric random variables.
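The staircase construction above is a running sum: sort the values with nonzero probability, and accumulate the PMF. A sketch (Python, not the book's Matlab):

```python
def pmf_to_cdf(pmf):
    """Cumulative sums of a discrete PMF, as sorted (x, F_X(x)) pairs.
    Each F_X(x) is the upper value of the jump at x, matching the
    right-continuous staircase described in the text."""
    total, cdf = 0.0, []
    for x in sorted(pmf):
        total += pmf[x]       # jump of height P_X(x) at x
        cdf.append((x, total))
    return cdf
```

The last entry is always 1, and the jump heights recover the PMF, which is one way the two functions can "each be obtained easily from the other."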

Appendix B summarizes the most important facts. In particular, Math Fact B. For an integer-valued random variable Y, we can do this in a simple way. Statisticians work with several kinds of averages. The ones that are used the most are the mean, the median, and the mode. The mean value of a set of numbers is perhaps the most familiar.

You get the mean value by adding up all the numbers in the collection and dividing by the number of terms in the sum. Think about the mean grade in the mid-term exam for this course. The median is also an interesting typical value of a set of data. The median is a number in the middle of the set of numbers, in the sense that an equal number of members of the set are below the median and above the median.

A third average is the mode of a set of numbers. The mode is the most common number in the collection of observations. There are as many or more numbers with that value than any other value. If there are two or more numbers with this property, the collection of observations is called multimodal. The sum of the ten grades is The median is 7 since there are four scores below 7 and four scores above 7.
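The three averages can be checked mechanically. The ten grades themselves are elided in the text, so the list below is hypothetical, chosen only to be consistent with the stated facts (median 7, with four scores below and four above; mode 5, occurring three times). Python's standard library computes all three:

```python
from statistics import mean, median, mode

# Hypothetical ten grades consistent with the text's example.
grades = [5, 5, 5, 6, 7, 7, 8, 9, 10, 10]
print(mean(grades), median(grades), mode(grades))  # mean 7.2, median 7, mode 5
```

With an even count, the median is the average of the two middle values; here both middle values are 7, so the median is 7 with four scores below it and four above, as in the text.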

The mode is 5 since that score occurs more often than any other. It occurs three times. We use probability models with random variables to characterize experiments with numerical outcomes. A parameter of a probability model corresponds to a statistic of a collection of outcomes. The most important of these is the expected value of a random variable, corresponding to the mean value of a collection of observations.

We will work with expectations throughout the course. A random variable can have several modes or medians. The expected value of a random variable corresponds to adding up a number of measurements and dividing by the number of terms in the sum.

Expectation is a synonym for expected value. Sometimes the term mean value is also used as a synonym for expected value. We prefer to use mean value to refer to a statistic of a set of experimental outcomes (the sum divided by the number of outcomes) to distinguish it from expected value, which is a parameter of a probability model.

If you recall your study of mechanics, this is analogous to a center of mass: think of point masses on a line with a mass of P_X(x) kilograms at a distance of x meters from the origin. This is why P_X(x) is called a probability mass function. We denote the value that X takes on the ith trial by x_i. We say that x_1, . Then the sum 2. After each trial, add up all the observations to date and divide by the number of trials. We prove in Chapter 7 that the result approaches the expected value as the number of trials increases without limit.
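The convergence claim is easy to see in simulation: average many independent trials and compare with the expected value. A sketch using a fair six-sided die, whose expected value is 3.5 (Python, not the book's Matlab; the function name is illustrative):

```python
import random

def running_average(n_trials, seed=1):
    """Average of n_trials fair six-sided die rolls.
    By the result proved in Chapter 7, this approaches E[X] = 3.5
    as n_trials grows."""
    rng = random.Random(seed)
    total = sum(rng.randint(1, 6) for _ in range(n_trials))
    return total / n_trials
```

With a few rolls the average wanders; with 100,000 rolls it sits very close to 3.5.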

In Section 2. The next theorem provides, without derivations, the expected values of binomial, Pascal, and discrete uniform random variables. Let the random variable K_n be the number of successes in the n trials. The denominator is n^k, also a product of k terms, all equal to n. Quiz 2. Voice calls cost 25 cents each and data calls cost 40 cents each. One example that occurs frequently is an experiment in which the procedure is to measure the power level of the received signal in a cellular telephone.

An observation is x, the power level in units of milliwatts. Because we obtain Y from another random variable, we refer to Y as a derived random variable. Based on experience, you have a probability model P_X(x) for the number of pages in each fax you send.

The phone company offers you a new charging plan for faxes: It will not accept faxes longer than ten pages. The following function corresponds to the new charging plan. You can analyze this model to decide whether to accept the new plan.

In this section we determine the probability model of a derived random variable from the probability model of the original random variable. We use this information to obtain P_Y(y). Before we present the procedure for obtaining P_Y(y), we address an issue that can be confusing to students learning probability, which is the properties of P_X(x) and g(x).

Although they are both functions with the argument x, they are entirely different. P_X(x) describes the probability model of a random variable. It has the special structure prescribed in Theorem 2. On the other hand, g(x) can be any function at all. When we combine them to derive the probability model for Y, we arrive at a PMF that also conforms to Theorem 2. To describe Y in terms of our basic model of probability, we specify an experiment consisting of the following procedure and observation: From s, find x, the corresponding value of X.

This procedure maps each experimental outcome to a number, y, that is a sample value of a random variable, Y. In this case, we consider all the possible values of y. It reduces to Equation 2. Find the PMF and expected value of Y, the charge for a fax. The experiment can be described by the following tree. Here each value of Y results in a unique value of X. Hence, we can use Equation 2. Applying Theorem 2. Find P_Y(y). The random variable N is the number of voice calls.
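The general procedure for a derived random variable Y = g(X) is the one described above: for each y, add P_X(x) over all x with g(x) = y. A sketch (Python; the PMF and the function g in the usage note are hypothetical, not the fax tariff from the text):

```python
def derived_pmf(pmf_x, g):
    """PMF of Y = g(X): P_Y(y) is the sum of P_X(x) over all x
    with g(x) = y."""
    pmf_y = {}
    for x, px in pmf_x.items():
        y = g(x)
        pmf_y[y] = pmf_y.get(y, 0.0) + px   # several x may map to one y
    return pmf_y
```

For example, with `pmf_x = {1: 0.5, 2: 0.3, 3: 0.2}` and a cap `g = lambda x: min(x, 2)` (a made-up charging rule in the spirit of the fax plan), the values 2 and 3 collapse onto y = 2, and the result still satisfies the PMF axioms.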

T cents is the cost of the three telephone calls monitored in the experiment. Instead, we can use the following property of expected values. As an exercise, you may want to compute E[Y] in Example 2. From this theorem we can derive some important properties of expected values. When students learn their own grades on a midterm exam, they are quick to ask about the class average. In the second term,. Another property of the expected value of a function of a random variable applies to linear transformations.

If we express the data random variable X in new units, the new average is just the old average transformed to the new units. It is tempting, but usually wrong, to apply it to other transformations. From Theorem 2. What is E[W]? It is one number that summarizes an entire probability model. Is it near the top of the class or somewhere near the middle? If this measure is small, observations are likely to be near the average.

A high measure of dispersion suggests that it is not unusual to observe events that are far from the average. The most important measures of dispersion are the standard deviation and its close relative, the variance. The variance of random variable X describes the difference between X and its expected value. A useful measure of the likely difference between X and its expected value is the expected absolute value of the difference, E[|Y|].

However, this parameter is not easy to work with mathematically in many situations, and it is not used frequently. The units of the variance are squares of the units of the random variable (exam points squared). If the standard deviation is 3 points, she is likely to be near the top. Var[X] is a central moment of X. Similarly, E[X^2] is the second moment.

We learn in Section 6. What is the variance of R? In order of increasing simplicity, we present three ways to compute Var[R]. Therefore, its expected value is also nonnegative. What are the expected value and variance of Y?
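Two of the standard ways to compute a variance are the definition, E[(X - mu)^2], and the second-moment shortcut, E[X^2] - mu^2; both are sums weighted by the PMF. A sketch verifying that they agree (Python, not the book's Matlab):

```python
def expected_value(pmf, g=lambda x: x):
    """E[g(X)] = sum over x of g(x) * P_X(x)."""
    return sum(g(x) * p for x, p in pmf.items())

def variance(pmf):
    """Var[X] computed two equivalent ways."""
    mu = expected_value(pmf)
    direct = expected_value(pmf, lambda x: (x - mu) ** 2)     # E[(X-mu)^2]
    shortcut = expected_value(pmf, lambda x: x * x) - mu ** 2 # E[X^2] - mu^2
    return direct, shortcut
```

For a fair six-sided die, both computations give Var[X] = 35/12, with mu = 3.5 and E[X^2] = 91/6.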

Multiplying a random variable by a constant is equivalent to a scale change in the units of measurement of the random variable. What is the variance of U? To use Theorem 2. By Theorem 2. In this section, we consider event A to be the observation of a particular value of a random variable. The conditioning event B contains information about X but not the precise value of X.

In general, we learn of the occurrence of an event B that describes some property of X. A conditioning event might be the event I that the fax contains an image. This collection of probabilities is a function of x. It is the conditional probability mass function of random variable X, given that B occurred. Here we extend our notation convention for probability mass functions. The name of a PMF is the letter P with a subscript containing the name of the random variable.

The argument of the function is usually the lowercase letter corresponding to the variable name. The argument is a dummy variable.

Proof The theorem follows directly from Theorem 1. If 40 percent of all seventy-year-olds have high blood pressure, what is the PMF of X? The next theorem uses Equation 2. What is the PMF of fax length in the second machine? Among the long faxes, each length has probability 0.
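The conditional PMF amounts to restricting P_X(x) to the conditioning event B and renormalizing by P[B], so that the probabilities in B again sum to 1. A sketch, with B represented as a set of values (Python, not the book's Matlab):

```python
def conditional_pmf(pmf, B):
    """P_{X|B}(x) = P_X(x) / P[B] for x in B, and 0 otherwise."""
    p_B = sum(p for x, p in pmf.items() if x in B)
    if p_B == 0:
        raise ValueError("conditioning event has probability zero")
    return {x: (p / p_B if x in B else 0.0) for x, p in pmf.items()}
```

For example, conditioning `{1: 0.2, 2: 0.3, 3: 0.5}` on B = {2, 3} rescales by P[B] = 0.8, giving 0.375 and 0.625 and assigning zero probability outside B.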

Sometimes instead of a letter such as B or L that denotes the subset of S X that forms the condition, we write the condition itself in the PMF.

Because the conditioning event B tells us that all possible outcomes are in B, we rewrite Theorem 2. Therefore, we can compute averages of the conditional random variable X|B and averages of functions of X|B in the same way that we compute averages of X. When we are given a family of conditional probability models P_{X|B_i}(x) for an event space B_1, . It follows that the conditional variance and conditional standard deviation conform to Definitions 2.

If the page has images (event I), then N is uniformly distributed between 1 and 50 packets. If the page is just text (event T), then N is uniform between 1 and 5 packets. Based on the calculation of the CDF, we then develop a method for generating random sample values. Generating a random sample is a simple simulation of an experiment that produces the corresponding random variable. The Matlab functions described in this section can be downloaded from the companion Web site. That is, for each requested x_i, finitepmf returns the value P_X(x_i). The Matlab function fax3pmf(x) implements P_X(x). We can then use fax3pmf to calculate the desired probabilities. Even this code is not perfect because Matlab has limited range.

For these large values of alpha, poissonpmf(alpha,x) will return zero for all x. Problem 2. The following example shows an implementation of the Poisson CDF. Other CDFs are easily developed following the same approach. What is the probability of no more than hits in one minute?

What is the probability of more than hits in one minute? Let M equal the number of hits in one minute (60 seconds). As in Chapter 1, we use rand as a source of randomness. Recall that rand(1) simulates an experiment that is equally likely to produce any real number in the interval [0, 1]. Generally, this implies that before we can produce a sample of random variable K, we need to generate the CDF of K.

We now present the details associated with generating binomial random variables. Table 2. Note that in each run, the relative frequencies are close to but not exactly equal to the corresponding PMF. We present an additional example, partly because it demonstrates some useful Matlab functions, and also because it shows how to generate relative frequency data for our random variable generators. In voltpower. The function pmfplot.
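The generation method described above is inverse-CDF sampling: draw a uniform sample u on [0, 1) and return the smallest x whose cumulative probability exceeds u. A sketch for any finite PMF, with Python's `random.random()` standing in for Matlab's rand (this is an illustration of the technique, not the book's code):

```python
import random

def sample_discrete(pmf, n, seed=1):
    """Generate n samples of a finite random variable by inverting its
    CDF: for each uniform u, return the first x whose running
    cumulative probability exceeds u."""
    xs = sorted(pmf)
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        u, total = rng.random(), 0.0
        for x in xs:
            total += pmf[x]
            if u < total:
                samples.append(x)
                break
    return samples
```

Counting the samples reproduces the observation about Table 2: the relative frequencies come out close to, but not exactly equal to, the PMF.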

Figure 2. In Matlab notation, sx and px represent the vectors s_X and p_X. Essentially, we use (sx, px), or equivalently (s, p), to represent a random variable X described by the following experimental procedure: Finite PMF: roll an n-sided die such that side i has probability p_i. It may seem unnecessary and perhaps even bizarre to allow these repeated values.

Although the next example was simple enough to solve by hand, it is instructive to use Matlab to do the work. Conditioning: Matlab also provides the find function to identify conditions. For a discrete uniform (0, 10) random variable X, we will use Matlab to examine this convergence. Chapter Summary With all of the concepts and formulas introduced in this chapter, there is a high probability that the beginning student will be confused at this point.

Part of the problem is that we are dealing with several different mathematical entities including random variables, probability functions, and parameters. Before plugging numbers or symbols into a formula, it is good to know what the entities are. Note that X is the name of the random variable.

A possible observation is x, which is a number. S_X is the range of X, the set of all possible observations x. The PMF gives the probability of observing any x. The expected value is a typical value of the random variable.

The variance describes the dispersion of sample values about the expected value. Find the PMF of Y, the number of points scored in a 1-and-1 given that any free throw goes in with probability p, independent of any other free throw.

You are manager of a ticket agency that sells concert tickets. You assume that people will call three times in an attempt to buy tickets and then give up. Let p be the probability that a caller gets through to your ticket agency.

What is the minimum value of p necessary to meet your goal? In the ticket agency of Problem 2. If all agents are busy when someone calls, the caller hears a busy signal. Suppose when a baseball player gets a hit, a single is twice as likely as a double, which is twice as likely as a triple, which is twice as likely as a home run.

Let B denote the number of bases touched safely during an at-bat. What is the PMF of B? The phone waits for a response and if none arrives within 0. What is the minimum value of n that produces a probability of 0. When the dog catches the Frisbee, it runs away with the Frisbee, never to be seen again.

The child continues to throw the Frisbee until the dog catches it. Let X denote the number of times the Frisbee is thrown. When the pager receives the message, it transmits an acknowledgment signal (ACK) to the paging system. If the paging system does not receive the ACK, it sends the message again. What is the minimum value of p necessary to achieve the goal? On hearing the wake-up signal, a meter transmits a message indicating the electric usage.

Each message is repeated eight times. Find the PMF of I. A radio station gives a pair of concert tickets to the sixth caller who knows the birthday of the performer. For each person who calls, the probability is 0. All calls are independent.

In a packet voice communications system, a source transmits packets containing digitized speech to a receiver. Because transmission errors occasionally occur, an acknowledgment (ACK) or a nonacknowledgment (NAK) is transmitted back to the source to indicate the status of each received packet.

When the transmitter gets a NAK, the packet is retransmitted. Voice packets are delay sensitive and a packet can be transmitted a maximum of d times. If a packet transmission is an independent Bernoulli trial with success probability p, what is the PMF of T, the number of times a packet is transmitted? A ticket is a winner with probability p independent of the outcome of all other tickets. Let N_i be the event that on day i you do not buy a ticket.

Let W_i be the event that on day i, you buy a winning ticket. Let L_i be the event that on day i you buy a losing ticket. The series ends as soon as one of the teams has won three games.

Assume that either team is equally likely to win any game independently of any other game played. What is the PMF of N? What is the CDF of N? In Problem 2. In Problem 2. C is the cost of one telephone call. In each case, state an experiment, the sample space, the range of the random variable, the PMF of the random variable, and the expected value. At this casino, the only game is roulette and the only bets allowed are red and green.

You have the following strategy. To meet our goal of presenting the logic of the subject, we could set out the material as dozens of definitions followed by three axioms followed by dozens of theorems. Each theorem would be accompanied by a complete proof.

While rigorous, this approach would completely fail to meet our second aim of conveying the intuition necessary to work on practical problems. To address this goal, we augment the purely mathematical material with a large number of examples of practical phenomena that can be analyzed by means of probability theory.

We also interleave definitions and theorems, presenting some theorems with complete proofs, others with partial proofs, and omitting some proofs altogether.

We find that most engineering students study probability with the aim of using it to solve practical problems, and we cater mostly to this goal. We also encourage students to take an interest in the logic of the subject (it is very elegant) and we feel that the material presented will be sufficient to enable these students to fill in the gaps we have left in the proofs. Therefore, as you read this book you will find a progression of definitions, axioms, theorems, more definitions, and more theorems, all interleaved with examples and comments designed to contribute to your understanding of the theory.

We also include brief quizzes that you should try to solve as you read the book. Each one will help you decide whether you have grasped the material presented just before the quiz. The problems at the end of each chapter give you more practice applying the material introduced in the chapter.

They vary considerably in their level of difficulty. Some of them take you more deeply into the subject than the examples and quizzes do. Most people who study probability have already encountered set theory and are familiar with such terms as set and element. For them, the following paragraphs will review material already learned and introduce the notation and terminology we use here. For people who have no prior acquaintance with sets, this material introduces basic definitions and the properties of sets that are important in the study of probability.

A set is a collection of things. We use capital letters to denote sets. The things that together make up the set are elements. When we use mathematical notation to refer to set elements, we usually use small letters.


Thus we can have a set A with elements x, y, and z. It teaches students how to apply probability theory to solving engineering problems. To exhibit the logic of the subject, we show clearly in the text three categories of theoretical material: definitions, axioms, and theorems. Definitions establish the logic of probability theory, while axioms are facts that we accept without proof. Theorems are consequences that follow logically from definitions and axioms.

Each theorem has a proof that refers to definitions, axioms, and other theorems. Although there are dozens of definitions and theorems, there are only three axioms of probability theory. These three axioms are the foundation on which the entire subject rests. In an experiment, we may say that an event occurs when a certain phenomenon is observed.

Choose a k-permutation of the k objects in the k-combination. Adopting the vocabulary of Section 1. How many students are seated in the fourth row?