APPLICATION OF RENEWAL REWARD PROCESSES IN HOMOGENEOUS DISCRETE MARKOV CHAIN

A renewal process, a special type of counting process that counts the number of events occurring up to (and including) time t, is investigated in order to provide insight into the performance measures of renewal processes and sequences, namely: the renewal function M(t), the mean number of renewals by time t; its Laplace-Stieltjes transform (LST), M̃(s); the LST of the inter-renewal distribution function, F̃_X(s); the LST of the n-fold convolution of the distribution function, F̃_X^(n)(s); the time S_n at which the nth renewal occurs; the average number of renewals per unit time over the interval (0, t], N(t)/t; and the expected reward, E[R]. Our aim is to analyse the distribution function of the renewal process and the sequence {X_n, n ≥ 1} using the concept of a discrete-time Markov chain so as to obtain the aforementioned performance measures. Properties of the Erlang-k, exponential and geometric distributions are used, together with existing laws, theorems and formulas of Markov chains. Through illustrative examples we conclude that it is not possible for an infinite number of renewals to occur in a finite period of time; that the expected number of renewals of a Poisson process increases linearly with time; and that, by the uniqueness property, the Poisson process is the only renewal process with a linear mean-value function. Lastly, we obtain the optimal replacement policy for a manufacturing machine, which shows that the memoryless property of the exponential lifetime distribution implies that at any point in time the remaining lifetime is identical to the lifetime of a brand-new machine, while the long-run rate of reward equals the expected reward per cycle divided by the mean cycle length.


INTRODUCTION
A renewal process is a special type of counting process, while a counting process {N(t), t ≥ 0} is a stochastic process which counts the number of events that occur up to (and including) time t. Thus any counting process N(t) is integer valued, with the property that N(t_1) ≤ N(t_2) whenever t_1 ≤ t_2, for all t ≥ 0. Let X_n, n ≥ 1, be the random variable that denotes the time which elapses between events (n − 1) and n. A renewal process is then a counting process {N(t), t ≥ 0} in which the sequence {X_1, X_2, ⋯} of nonnegative random variables representing the times between events is independent and identically distributed. Observe that this definition permits events to occur simultaneously (X_n = 0), but we shall restrict ourselves to renewal processes where this cannot happen. In our definition, X_1, the time until the first event, is independent of, and has the same distribution as, all other random variables X_i, i > 1. This may be interpreted in terms of a zeroth event which occurs at time t = 0, giving X_1 the same inter-event-time meaning as the other X_i. An alternative approach is to assume that the time until the occurrence of the first event has a different distribution from the random variables X_i, i > 1. The word "renewal" is appropriate since on the occurrence of each event the process essentially renews itself: at the exact moment of occurrence of an event, the distribution of the time until the next event is independent of everything that has happened previously. The term "recurrent process" is sometimes used in place of "renewal process".
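As a quick illustration, the counting definition N(t) = max{n : S_n ≤ t} can be simulated directly. The following minimal Python sketch is our own illustrative addition (the sampler interface and the choice of Exp(2) inter-renewal times are assumptions, not part of the text); it counts renewals in (0, t]:

```python
import random

def renewal_count(t, sample_x):
    """N(t) = max{n : S_n <= t}: number of renewals in (0, t],
    where sample_x() draws one inter-renewal time X_n."""
    n, s = 0, 0.0
    while True:
        s += sample_x()      # S_{n+1} = S_n + X_{n+1}
        if s > t:            # the (n+1)th renewal falls after t
            return n
        n += 1

# Illustrative choice: i.i.d. Exp(rate=2) inter-renewal times.
random.seed(1)
n10 = renewal_count(10.0, lambda: random.expovariate(2.0))
```

Because N(t) counts arrivals, it is automatically integer valued and nondecreasing in t, matching the defining properties above.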
Romanovsky (1970) established the application and simulation of discrete Markov chains; Moler and Van Loan (1978) explained nineteen dubious ways to compute the exponential of a matrix; Saff (1973) examined the degree of the best rational approximation to the exponential function; Philippe and Sidje (1993) derived the transient solution of Markov processes by Krylov subspaces; and Stewart (1994, 2009) discussed the development of numerical solutions of Markov chains. Pesch et al. (2015) demonstrated the appropriateness of the Markov chain technique for modelling the wind feed-in in Germany, and Agboola (2016) demonstrated the inclusion of Markov chains in a repair problem, while Uzun and Kiral (2017) used a fuzzy-state Markov chain model to anticipate the direction of gold price movement and to estimate the probabilistic transition matrix of gold price closing returns, and Aziza et al. (2019) used a fuzzy-state Markov chain model to predict monthly rainfall data. Clement (2019) demonstrated the application of Markov chains to the spread of disease infection, showing that Hepatitis B became more infectious over time than tuberculosis and HIV, while Vermeer and Trilling (2020) demonstrated the application of Markov chains to journalism. In this study, however, the application of renewal reward processes in a homogeneous discrete Markov chain is considered, with {X_n, n ≥ 1} a sequence of independent and identically distributed random variables, {Y_n, n ≥ 0} a homogeneous, discrete-time Markov chain, and μ the mean time between successive renewals.

MATERIALS AND METHODS
The study area consisted of the analysis of renewal processes, inter-renewal events and renewal sequences using the concept of a discrete-time Markov chain. We started with examples of renewal processes, as follows:
i. Suppose the inter-arrival times of pieces of junk mail are independent and identically distributed. Then {N(t), t ≥ 0}, the number of pieces of junk mail that have arrived by time t, is a renewal process.
ii. Assume an infinite supply of standard flashlight batteries whose lifetimes are independent and identically distributed. As soon as one battery dies it is replaced by another. Then {N(t), t ≥ 0}, the number of batteries that have failed by time t, is a renewal process.
iii. A biased coin is tossed at times n = 1, 2, ⋯. The probability of a head appearing at any toss is p, 0 < p < 1. Then {N(t), t ≥ 0}, with N(0) = 0, the number of heads obtained up to and including time t, is a renewal process. The times between renewals in this case all have the same geometric probability distribution function, P(X = k) = (1 − p)^(k−1) p, k = 1, 2, ⋯. The renewal process that results is called a binomial process.
Let {N(t), t ≥ 0} be a renewal process with inter-event (or inter-renewal) periods X_1, X_2, ⋯ and let S_n be the time at which the nth event/renewal occurs, i.e., S_0 = 0, S_n = X_1 + X_2 + ⋯ + X_n, n ≥ 1.
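The binomial process of example iii can be checked numerically. The sketch below is an illustrative addition (the values p = 0.3, t = 100 and the trial count are our own assumptions); it simulates the coin-tossing renewal process and compares the average number of heads by time t with the binomial mean tp:

```python
import random

def binomial_process(t, p, rng):
    """Toss a biased coin at times 1, 2, ..., t; N(t) = number of heads."""
    return sum(1 for _ in range(t) if rng.random() < p)

rng = random.Random(42)
p, t, trials = 0.3, 100, 5000
mean_heads = sum(binomial_process(t, p, rng) for _ in range(trials)) / trials
# For the binomial process, E[N(t)] = t * p = 30.
```

The geometric inter-renewal times make this the discrete-time analogue of the Poisson process discussed later.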
In other words, the process renews itself for the nth time at time S_n. The sequence {S_n, n ≥ 0} is called a renewal sequence. The period between renewals is called a cycle; a cycle is completed the moment a renewal occurs. We now begin the analysis of the distribution function of the random variables {X_n, n ≥ 1} so as to obtain the probability of exactly n renewals by time t, as well as the Laplace-Stieltjes transform (LST) of the renewal process {N(t), t ≥ 0}. Let {Y_n, n ≥ 0} be a homogeneous, discrete-time Markov chain whose state space is the nonnegative integers, and assume that at time n = 0 the chain is in state k. Let S_n denote the time at which the nth visit to state k begins. Since {X_n = S_n − S_(n−1), n ≥ 1} is a sequence of independent and identically distributed random variables, it follows that {S_n, n ≥ 0} is a renewal sequence and {N(t), t ≥ 0}, the number of visits to state k in (0, t], is a renewal process associated with state k. The initial visit to state k (the process starting at time n = 0 in state k) must not be included in this count. A similar statement can be made with respect to a continuous-time Markov chain. Let F_X(t) be the distribution function of the random variables {X_n, n ≥ 1}, and let us find the distribution function of the renewal process {N(t), t ≥ 0}. We first note that, since the random variables X_i are independent and identically distributed, the distribution of the renewal sequence S_n = X_1 + X_2 + ⋯ + X_n is given by F_X^(n)(t), the n-fold convolution of F_X(t) with itself. The only way the number of renewals can exceed or equal n at time t is if the nth renewal occurs no later than time t (S_n ≤ t). The converse is also true: the only way that the nth renewal can occur no later than time t is if the number of renewals prior to t is at least equal to n.
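The claim that successive visits to a fixed state of a discrete-time Markov chain form a renewal sequence can be illustrated with a toy chain. In the sketch below (a deliberately simple illustrative assumption: from either state the chain moves to state 0 with probability q, independently of the current state), the times between visits to state 0 are i.i.d. geometric with mean 1/q:

```python
import random

rng = random.Random(11)
q, steps = 0.25, 200000
state, last_visit, gaps = 0, 0, []
for t in range(1, steps + 1):
    # From either state the chain moves to state 0 with probability q.
    state = 0 if rng.random() < q else 1
    if state == 0:
        gaps.append(t - last_visit)   # time since the previous visit to 0
        last_visit = t
mean_gap = sum(gaps) / len(gaps)
# Return times to state 0 are i.i.d. Geometric(q), so the mean is 1/q = 4.
```

Note that, as required, the start of the chain in state 0 at time 0 is not counted as a visit; it only anchors the first inter-visit time.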
This means that N(t) ≥ n if and only if S_n ≤ t, and we may write

P(N(t) ≥ n) = P(S_n ≤ t) = F_X^(n)(t).   (3)

Also, if F_(A+B)(t) is the distribution function that results when two independent random variables having distribution functions F_A(t) and F_B(t) are added, then the convolution is defined as

F_(A+B)(t) = ∫_0^t F_A(t − x) f_B(x) dx,

where f_A(t) and f_B(t) are the corresponding probability density functions.
Therefore, the probability of exactly n renewals by time t is given by

P(N(t) = n) = P(N(t) ≥ n) − P(N(t) ≥ n + 1),

and using Equation (3) we conclude that

P(N(t) = n) = F_X^(n)(t) − F_X^(n+1)(t).
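For exponentially distributed inter-renewal times, the n-fold convolution F_X^(n)(t) is the Erlang-n distribution function, and the difference F_X^(n)(t) − F_X^(n+1)(t) should reproduce the Poisson probability e^(−λt)(λt)^n/n!. The following sketch (the values of λ, t and n are arbitrary illustrative choices) verifies this numerically:

```python
import math

def erlang_cdf(n, lam, t):
    """F_X^(n)(t): CDF of the sum of n i.i.d. Exp(lam) variables (Erlang-n)."""
    if n == 0:
        return 1.0                    # S_0 = 0, so F^(0)(t) = 1 for t >= 0
    return 1.0 - sum(math.exp(-lam * t) * (lam * t) ** k / math.factorial(k)
                     for k in range(n))

lam, t, n = 1.5, 2.0, 3
p_exact = erlang_cdf(n, lam, t) - erlang_cdf(n + 1, lam, t)   # P(N(t) = n)
p_poisson = math.exp(-lam * t) * (lam * t) ** n / math.factorial(n)
```

The agreement is exact (up to floating-point rounding), since the Erlang-n CDF is the tail of a Poisson(λt) sum.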

Renewal Reward Processes
Consider a renewal process {N(t), t ≥ 0} and let X_n be the nth inter-renewal time. Assume that on the completion of a cycle a "reward" is received or, alternatively, a "cost" is paid. Let R_n be the reward (positive or negative) obtained at the nth renewal. We assume that the rewards R_n are independent and identically distributed. This does not prevent R_n from depending on the length of the cycle in which the reward is earned. The total reward received by time t is given by

R(t) = Σ_{n=1}^{N(t)} R_n.   (7)

Observe that, if R_n = 1 for all n, then R(t) = N(t), the original renewal process. We now show that

lim_{t→∞} R(t)/t = E[R]/E[X] with probability 1,

where E[R] is the expected reward obtained in any cycle and E[X] is the expected duration of a cycle. We have

R(t)/t = (1/t) Σ_{n=1}^{N(t)} R_n = ( Σ_{n=1}^{N(t)} R_n / N(t) ) × ( N(t)/t ),

and, by the strong law of large numbers, Σ_{n=1}^{N(t)} R_n / N(t) → E[R] while N(t)/t → 1/E[X] as t → ∞.
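The renewal reward theorem can also be checked by simulation. In the sketch below (the Exp(1) cycle lengths and the per-cycle reward R_n = 2X_n are our own illustrative assumptions; note the reward is deliberately allowed to depend on the cycle length), R(t)/t should approach E[R]/E[X] = 2:

```python
import random

def reward_rate(t_max, rng):
    """Simulate R(t)/t for a renewal reward process with cycle lengths
    X_n ~ Exp(1) and per-cycle reward R_n = 2 * X_n."""
    t, total = 0.0, 0.0
    while True:
        x = rng.expovariate(1.0)
        if t + x > t_max:        # next renewal falls beyond the horizon
            return total / t_max
        t += x
        total += 2.0 * x         # reward earned at this renewal

rate = reward_rate(100000.0, random.Random(7))
# Renewal reward theorem: R(t)/t -> E[R]/E[X] = 2*E[X]/E[X] = 2.
```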

This tells us that the long-run rate of reward is equal to the expected reward per cycle divided by the mean cycle length. It may also be shown that

lim_{t→∞} E[R(t)]/t = E[R]/E[X],

i.e., the expected reward per unit time in the long run is also equal to the expected reward per cycle divided by the mean cycle length. Renewal reward models arise in the context of an ongoing process in which a product, such as a car or a piece of machinery, is used for a period of time (a cycle) and then replaced. In order to have a renewal process, the new car is assumed to have identical characteristics (more precisely, an identical lifetime distribution function) to the one that is replaced. A replacement policy specifies a recommended time T at which to purchase the new product and the cost c_1 of doing so at this time. A cost c_2 over and above the replacement cost c_1 must be paid if for some reason (e.g., the car breaks down) replacement must take place prior to the recommended time T. In some scenarios a third factor, the resale value of the car, is also included. Let X_n, n ≥ 1, be the lifetime of the nth machine and assume that the X_n are independent and identically distributed with probability distribution function F_X(t). If S_n is the time of the nth replacement, then the sequence {S_n, n ≥ 0} is a renewal sequence. Let Z_n be the time between two replacements, i.e., Z_n is the duration of the nth cycle, and we have

Z_n = min{X_n, T}.   (12)

In the absence of a resale value, the reward (or, more properly, cost) is given by

R_n = c_1 + c_2 · 1{X_n < T},   (13)

where 1{·} is the indicator function, and R(t), the total cost up to time t, is a renewal reward process:

R(t) = Σ_{n=1}^{N(t)} R_n, t ≥ 0.   (14)

Using Z to denote the duration of an arbitrary cycle and X the lifetime of an arbitrary machine, the expected cost per cycle is

E[R] = c_1 + c_2 P(X < T) = c_1 + c_2 F_X(T).

It makes sense to try to find the optimal value of T, the value which minimizes the long-run expected cost per unit time E[R]/E[Z]. If T is chosen to be small, the number of replacements will be high but the cost due to failure will be small. If T is chosen to be large, then the opposite occurs.
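For an exponential lifetime the cost rate g(T) = (c_1 + c_2 F_X(T)) / E[min(X, T)] can be evaluated in closed form, and, because of the memoryless property, it only decreases as T grows, so planned replacement never pays. The sketch below makes this concrete (the values of λ, c_1 and c_2 are our own illustrative assumptions):

```python
import math

def cost_rate(T, lam=0.5, c1=100.0, c2=400.0):
    """Long-run cost per unit time g(T) for an Exp(lam) lifetime when a
    machine is replaced at age T (cost c1) or on earlier failure (extra c2)."""
    F = 1.0 - math.exp(-lam * T)      # F_X(T) = P(failure before T)
    e_cycle = F / lam                 # E[min(X, T)] = (1 - e^{-lam*T}) / lam
    return (c1 + c2 * F) / e_cycle    # E[R] / E[Z]

rates = [cost_rate(T) for T in (1.0, 2.0, 5.0, 10.0, 50.0)]
# With a memoryless lifetime, g(T) = lam*c1/F(T) + lam*c2 is strictly
# decreasing in T, so the optimal policy is to replace only on failure.
```

For lifetime distributions with increasing failure rate (e.g., Erlang-k with k ≥ 2), g(T) generally has an interior minimum, and the same formula can be minimized numerically.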

Results and Discussions
This section discusses the derivation of formulae for performance measures such as: the renewal function M(t) = E[N(t)]; the Laplace-Stieltjes transform (LST) of M(t), M̃(s); the LST of the distribution function F_X(t), F̃_X(s); the LST of the n-fold convolution of F_X(t), F̃_X^(n)(s); the time S_n at which the nth renewal occurs; the average number of renewals per unit time over the interval (0, t], N(t)/t; and the expected reward, E[R].

Illustrative example 1: Assume that the inter-renewal times are exponentially distributed with parameter λ, so that S_n has an Erlang-n distribution; then the renewal process {N(t), t ≥ 0} has a Poisson distribution and is called a Poisson process. We now show that it is not possible for an infinite number of renewals to occur in a finite period of time. Let μ = E[X_n], n ≥ 1, be the mean time between successive renewals. Since the {X_n, n ≥ 1} are nonnegative random variables and we have chosen the simplifying assumption that P(X_n = 0) = F_X(0) < 1, it follows that μ must be strictly greater than zero, i.e., 0 < μ ≤ ∞. In instances in which the mean inter-renewal time is infinite, we shall interpret 1/μ as zero. We now show that P(N(t) < ∞) = 1 for all finite t. By the strong law of large numbers,

S_n/n = (X_1 + X_2 + ⋯ + X_n)/n → μ as n → ∞, with probability 1,

and since 0 < μ ≤ ∞, S_n must tend to infinity as n tends to infinity. Now using the fact that N(t) = max{n : S_n ≤ t}, we find, for finite t, that N(t) is finite with probability 1. The result is for finite t only and does not hold for t → ∞: when t → ∞ we have P(lim_{t→∞} N(t) = ∞) = 1.
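The strong-law step S_n/n → μ is easy to check by simulation. The sketch below averages a long run of inter-renewal times (the choice of exponential inter-renewal times with mean μ = 2 is an illustrative assumption):

```python
import random

rng = random.Random(5)
mu, n = 2.0, 200000
s_n = sum(rng.expovariate(1.0 / mu) for _ in range(n))  # S_n = X_1 + ... + X_n
ratio = s_n / n
# Strong law of large numbers: S_n / n -> mu = 2 with probability 1.
```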
Finding the probability distribution of an arbitrary renewal process can be difficult, so that frequently only E[N(t)], the expected number of renewals by time t, is computed. The mean E[N(t)] is called the renewal function and is denoted by M(t). We have

M(t) = E[N(t)] = Σ_{n=1}^{∞} P(N(t) ≥ n) = Σ_{n=1}^{∞} F_X^(n)(t).

We now show that M(t) uniquely determines the renewal process. Taking Laplace-Stieltjes transforms, and using the fact that the LST of an n-fold convolution is the nth power of the LST,

M̃(s) = Σ_{n=1}^{∞} [F̃_X(s)]^n = F̃_X(s)/(1 − F̃_X(s)),

so that

F̃_X(s) = M̃(s)/(1 + M̃(s)),

and once M̃(s) is known completely, so also is F̃_X(s), and vice versa. Since a renewal process is completely characterized by F_X(t), it follows that it is also completely characterized by the renewal function M(t). Continuing with the renewal function, observe that for a Poisson process with rate λ we have M(t) = λt, which increases linearly with t. We may conclude that, when the expected number of renewals increases linearly with time, the renewal process is a Poisson process. Furthermore, from the uniqueness property, the Poisson process is the only renewal process with a linear mean-value (renewal) function. We now give two important results on the limiting behavior of renewal processes. We have previously seen that the limit as t tends to infinity of N(t) is infinite. Our first result concerns the rate at which N(t) → ∞. Observe that N(t)/t is the average number of renewals per unit time. We now show that, with probability 1, N(t)/t → 1/μ as t → ∞, and point out why 1/μ is called the rate of the renewal process. Recall that S_n is the time at which the nth renewal occurs. Since N(t) is the number of arrivals that occur prior to or at time t, S_{N(t)} is just the time at which the last renewal prior to or at time t occurred. Likewise, S_{N(t)+1} is the time at which the first renewal after time t occurs. Therefore

S_{N(t)} ≤ t < S_{N(t)+1},  so that  S_{N(t)}/N(t) ≤ t/N(t) < S_{N(t)+1}/N(t),

and since N(t) → ∞ when t → ∞, both outer terms converge to μ with probability 1 by the strong law of large numbers; hence t/N(t) → μ, i.e., N(t)/t → 1/μ. Also, by means of a similar argument,

lim_{t→∞} E[N(t)]/t = lim_{t→∞} M(t)/t = 1/μ,

which is the elementary renewal theorem.
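The identity M(t) = Σ_{n≥1} F_X^(n)(t) and the linearity M(t) = λt for the Poisson process can be verified together numerically. In the sketch below (λ = 2, t = 5 and the truncation point n_max are illustrative choices; the tail terms beyond n_max are negligible here), the Erlang-n distribution functions are summed and compared with λt:

```python
import math

def erlang_cdf(n, lam, t):
    """F_X^(n)(t): CDF of the sum of n i.i.d. Exp(lam) variables (Erlang-n)."""
    return 1.0 - sum(math.exp(-lam * t) * (lam * t) ** k / math.factorial(k)
                     for k in range(n))

def renewal_function(lam, t, n_max=100):
    """M(t) = sum_{n>=1} F_X^(n)(t), truncated at n_max terms."""
    return sum(erlang_cdf(n, lam, t) for n in range(1, n_max + 1))

m = renewal_function(2.0, 5.0)
# For the Poisson process M(t) = lam * t = 10: a linear renewal function.
```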