Dr. Salvador A. Gezan

21 June 2022

At present, with the fast development of genotyping, we have access to genomic data that we can use with different statistical and computational approaches, for example, to accelerate genetic gains and to select outstanding genotypes as parents or for commercial release. Genomic data is also useful for assessing genetic variability and diversity in the context of breeding programs or other research studies.

Most of the genomic data used for breeding comes from single nucleotide polymorphisms (SNPs). Typically, we see this data as a matrix containing several individuals, often thousands, genotyped for a number of SNPs (i.e., nucleotide readings AA, AC, TT, etc.). Depending on the quality and characteristics of this data, we can use it for different analytical purposes.

In this blog, we will describe some of the uses of this SNP data. We will focus mainly on the available number of SNPs (i.e., markers) and what analyses these enable. Note, the classification of low-, medium- and high-density panels is somewhat arbitrary but it’s what’s often used in plant breeding.

Let’s start with what is known as low-density (LD) panels. These typically contain fewer than 200 SNPs, and are the cheapest option you can have (often just a couple of US dollars per sample). With this small number of SNP markers, their main use is for quality assurance (QA) and quality control (QC). That is, they can be used for:

- **Verification of Crosses.** If parents are genotyped, then it is possible to check that the correct crosses were performed.
- **Parentage Reconstruction.** If parents are genotyped, it is possible to reconstruct the full pedigree of a group of offspring.
- **Marker Assisted Selection.** If a preliminary group of markers has been identified as associated with one or more QTLs of interest, these can easily be incorporated into the panel and used to discriminate genotypes.
- **Population Assignment.** A group of markers can be used to assign individuals to different populations (e.g., origins).
- **Sex Assignment.** Depending on the dynamics of the organism, sometimes it is possible to identify one or more markers that can be used to assign sex to individuals before they mature.
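To give a sense of how little machinery these QA/QC uses require, cross verification can be reduced to counting opposite-homozygote conflicts between a putative parent and its offspring: a true parent cannot be homozygous AA at a locus where the offspring is homozygous CC. A minimal sketch in Python (the function names and the 2% error tolerance are our own illustrative choices, not from any particular panel provider):

```python
# Illustrative sketch: verify a putative parent-offspring pair by counting
# opposite-homozygote conflicts (e.g., parent "AA" vs offspring "CC").
# Genotypes are two-letter strings; None marks a missing (failed) call.

def opposite_homozygotes(parent, offspring):
    """Count loci where parent and offspring are homozygous for different alleles."""
    conflicts = 0
    compared = 0
    for p, o in zip(parent, offspring):
        if p is None or o is None:
            continue  # skip missing genotype calls
        compared += 1
        if p[0] == p[1] and o[0] == o[1] and p[0] != o[0]:
            conflicts += 1
    return conflicts, compared

def is_plausible_parent(parent, offspring, max_error_rate=0.02):
    """Allow a small conflict rate to absorb genotyping errors (threshold is arbitrary)."""
    conflicts, compared = opposite_homozygotes(parent, offspring)
    return compared > 0 and conflicts / compared <= max_error_rate

parent    = ["AA", "AC", "TT", "GG", None, "CC"]
child_ok  = ["AC", "AA", "TT", "GT", "AA", "CC"]
child_bad = ["CC", "AA", "AA", "GG", "AA", "AA"]

print(is_plausible_parent(parent, child_ok))   # → True
print(is_plausible_parent(parent, child_bad))  # → False
```

With only one or two hundred SNPs, even a handful of opposite-homozygote conflicts is already strong evidence against a claimed parentage, which is why such small panels are sufficient for this purpose.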

Other uses are possible, but they are limited. For example, we could identify sibships among some individuals (e.g., full-sibs), and it may be possible to calculate some population statistics of genetic diversity. However, these uses can carry high levels of uncertainty.

An important aspect of LD panels is that they often include markers reserved for verification, and in addition some markers will be missing due to genotyping issues. Hence, their effective size will be lower than their nominal size. Another important consideration is that these commercial panels are often constructed for general use; hence, they may not be based exactly on the population of interest. This will lead to some fixed (MAF = 0%) or almost fixed (MAF < 2%) markers that contribute little or nothing to the above uses.
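Screening out fixed or nearly fixed markers is a simple frequency calculation. A hedged sketch in Python, assuming SNPs are coded as 0/1/2 dosages (copies of the alternate allele) and using the 2% MAF threshold mentioned above (the function names are illustrative):

```python
# Illustrative sketch: minor allele frequency (MAF) filtering on a matrix of
# SNP dosages coded 0/1/2, one row per individual, one column per marker.

def maf(dosages):
    """MAF of one marker from 0/1/2 dosage codes (missing values excluded)."""
    valid = [d for d in dosages if d is not None]
    p = sum(valid) / (2 * len(valid))  # alternate-allele frequency
    return min(p, 1 - p)

def usable_markers(snp_matrix, min_maf=0.02):
    """Return column indices of markers that are not fixed or nearly fixed."""
    n_markers = len(snp_matrix[0])
    keep = []
    for j in range(n_markers):
        column = [row[j] for row in snp_matrix]
        if maf(column) >= min_maf:
            keep.append(j)
    return keep

# 4 individuals x 3 markers: marker 1 is fixed (MAF = 0) and is dropped;
# markers 0 and 2 segregate and are retained.
snp = [[0, 0, 0],
       [1, 0, 0],
       [2, 0, 0],
       [1, 0, 1]]
print(usable_markers(snp))  # → [0, 2]
```

The count of indices returned is the panel's effective size for the population at hand, which, as noted above, can be noticeably lower than its nominal size.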

However, despite the above shortcomings, LD panels constitute a very good and cheap alternative for some small programs to start considering genomic tools in their breeding; and for example, verifying crosses is a critical step!

Medium-density (MD) panels are defined very differently depending on the field. Here, we will use the plant breeding definition of between 2,000 and 10,000 SNPs (i.e., 2K to 10K); in animal breeding, MD panels are at least five times larger. This larger number of SNPs offers more opportunities, but the cost of an MD panel is several times that of an LD panel, which often limits, for most breeding programs, the number of individuals that can be genotyped. In addition to the LD panel uses mentioned above, MD panels enable:

- **Genomic Relationship Matrix Estimation.** It is now possible to estimate with reasonable accuracy the relatedness between any pair of individuals, and these matrices can be used for many other objectives.
- **Genomic Prediction Models.** MD panels allow us to fit genomic prediction (GP) models. These tend to have lower accuracy, but are still sufficient and useful for operational use.
- **Marker Imputation.** Missing marker data, if complemented with individuals genotyped using a high-density (HD) panel, can be imputed successfully, and the imputed data can be used for other purposes, such as genomic prediction.
- **Genetic Linkage Maps.** This larger number of markers allows for the construction of reliable linkage (or genetic) maps with plenty of other uses, such as imputation.
- **QTL Analysis.** An MD panel has a reasonable number of markers to perform QTL analysis, for example on recombinant inbred line (RIL) populations.
- **Diversity Studies.** It is easier to perform genomic studies that deal with diversity, as it is now possible to follow, for example, inbreeding and effective population size over generations.
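As an illustration of the first item, a genomic relationship matrix is commonly built from centred SNP dosages; the sketch below follows the widely used VanRaden (2008) "method 1" construction, assuming a complete 0/1/2 dosage matrix with no missing data (one common construction, not the only one):

```python
import numpy as np

def grm(M):
    """Genomic relationship matrix (VanRaden 2008, method 1).
    M: n_individuals x n_markers array of 0/1/2 dosages, no missing values."""
    p = M.mean(axis=0) / 2.0             # allele frequencies per marker
    Z = M - 2.0 * p                      # centre each marker by twice its frequency
    denom = 2.0 * np.sum(p * (1.0 - p))  # scale to an additive-relationship basis
    return Z @ Z.T / denom

# Toy data: 3 individuals genotyped at 4 markers.
M = np.array([[0, 1, 2, 1],
              [1, 1, 2, 0],
              [2, 0, 1, 1]], dtype=float)
G = grm(M)
print(np.round(G, 3))  # symmetric 3x3 matrix; off-diagonals estimate relatedness
```

The resulting matrix can replace (or be combined with) the pedigree-based relationship matrix in mixed-model analyses such as GBLUP, which is why it underpins so many of the other uses listed.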

In addition, MD panels can also include several markers previously identified for use in Marker Assisted Selection (MAS) or for sex determination. Given the larger number of SNPs, the presence of verification, missing or fixed markers is of less concern, but should be kept under control in order to maximize the usefulness of the panel. Good or poor selection of SNP markers can make a large difference to the useful life of the panel and the accuracy of the GP models; hence, it is recommended to ensure markers are well selected when constructing these panels.

At present, MD panels should be the panel of choice for most breeding operations. These panels will ensure that the collected data remains reasonably useful in the future, especially as technology and prices change, justifying the genotyping investment in the long term. In addition, they represent the best panels for breeding programs to start applying (and playing) with more sophisticated genomic tools, such as genomic prediction with GBLUP and/or Bayes B.

Finally, we have the high-density (HD) panels, with more than 20,000 SNPs (20K). But again, this is relative: most HD panels for plants and aquatic species are in the range of 30K to 70K, but in some species panels exceed 700K SNPs (as in dairy cattle) or even reach into the millions (as in humans). The cost of an HD panel varies greatly with the number of SNPs, the technology used, and the number of individuals to genotype. In addition to all the previous uses for LD and MD panels, these panels provide:

- **High Accuracy Genomic Prediction Models.** We expect greater accuracy from these panels, possibly with a long useful life not requiring tuning over time, and little or no imputation required.
- **Genome-wide Association Studies (GWAS).** This is the key data for marker discovery under GWAS, where it is more likely to find a marker positioned directly on a coding region within a functional gene.

These HD panels are often highly redundant in their information; hence, they tend to present a good portion of fixed alleles, and missing data that rules many markers out. However, they are still large enough to support all of the uses mentioned above. It is also possible for these panels to be shared by a combination of programs or research groups, allowing: 1) pooling of resources, 2) price negotiation, and 3) sharing of genomic information, as is commonly seen in genomic consortia. In addition, HD panels, if originally constructed from a diverse base population, may not require many future revisions.

Another important aspect of HD panels is that they constitute the raw information for imputation into MD panels (and, although only remotely possible, into LD panels), and in the future they will be the panels that link the array of panels used in a breeding program (LD, MD, or panels from different groups). Hence, HD panels allow us to connect different sources of genomic data.

One important note concerns the number of markers required for genomic selection (GS). We recommend at least an MD panel for this, with no fewer than 2K useful (post-filtering) SNPs. Interestingly, some studies have reported that dropping the number of SNPs to 1K or fewer results in a considerable loss of accuracy in the GP models, and, conversely, that more than 10K SNPs often does not yield considerably better accuracies than 5K SNPs. In addition, some studies have successfully used 2-3K SNP panels supported with imputation from an HD panel, with an interesting increase in accuracy. Hence, there are plenty of options to exploit MD panels.

Another aspect concerns maximizing the accuracy of GP models. Of course, the more informative SNPs available the better! But there are many other factors that affect, for good or ill, the success of a genomic model: for example, the level of relatedness between the training and evaluation populations, linkage disequilibrium, the genetic architecture of the traits, and of course the heritability of the traits of interest. All of these elements may eventually tilt the decision from one type of panel to another.

Another difficulty that can arise is the detection of markers associated with some traits that are present in the population at very low rates, such as the case of ‘standing genetic variation’. This implies that it is difficult to find these markers on most panels as they will tend to be dropped early. Therefore, specific or very large HD panels might be needed in these cases.

Finally, as mentioned before, the use of LD or MD panels requires a careful pre-selection of markers to make the most of these panels. If this is done poorly, or for another population (e.g., using an available panel developed for another breeding group), then the benefits of the corresponding panels will possibly be greatly reduced. This also implies that, particularly for the LD panel, the set of markers in use has to be constantly reviewed as the population changes over time or new markers from MAS or sex determination are discovered.

In this blog, we have pointed out a few of the uses and benefits of each of the different panels. As the cost and offering of these panels changes constantly, we suspect that at some point we will be able to afford HD panels for a few cents (or pennies)! But before we get there, we need to make the most of our current resources, and gathering the right data for the right analysis is critical.

Dr. Salvador Gezan is a statistician/quantitative geneticist with more than 20 years’ experience in breeding, statistical analysis and genetic improvement consulting. He currently works as a Statistical Consultant at VSN International, UK. Dr. Gezan started his career at Rothamsted Research as a biometrician, where he worked with Genstat and ASReml statistical software. Over the last 15 years he has taught ASReml workshops for companies and university researchers around the world.

Dr. Gezan has worked on agronomy, aquaculture, forestry, entomology, medical, biological modelling, and with many commercial breeding programs, applying traditional and molecular statistical tools. His research has led to more than 100 peer reviewed publications, and he is one of the co-authors of the textbook *Statistical Methods in Biology: Design and Analysis of Experiments and Regression*.

Related Reads

Dr. Salvador A. Gezan

09 March 2022

Meta-analysis using linear mixed models

Meta-analysis is a statistical tool that allows us to combine information from related, but independent, studies that all aim to estimate or compare the same effects from contrasting treatments. Meta-analysis is widely used in many research areas where an extensive literature review is performed to identify studies that had a similar research question. These are later combined using meta-analysis to estimate a single combined effect. Meta-analyses are commonly used to answer healthcare and medical questions, where they are widely accepted, but they are also used in many other scientific fields.

By combining several sources of information, meta-analyses have the advantage of greater statistical power, therefore increasing our chance of detecting a significant difference. They also allow us to assess the variability between studies, and help us to understand potential differences between the outcomes of the original studies.

The underlying premise in meta-analysis is that we are collecting information from a group of, say *n*, studies that individually estimated a parameter of interest, say *θ*, each with an associated measure of uncertainty (e.g., a standard error).

In meta-analysis, the target population parameter *θ* can correspond to any of several statistics, such as a treatment mean or a difference between treatments; or, more commonly in clinical trials, the log-odds ratio or relative risk.

There are two models that are commonly used to perform meta-analyses: the fixed-effect model and the random-effects model. For the fixed-effect model, it is assumed that there is only a single unique true effect (our single *θ* above), which is estimated from a random sample of studies. That is, the fixed-effect model assumes that there is a single population effect, and the deviations obtained from the different studies are only due to sampling error or random noise. The linear model (LM) used to describe this process can be written as:

*yᵢ* = *θ* + *eᵢ*

where *yᵢ* is the observed response from study *i*, *θ* is the population parameter (also often known as *μ*, the overall mean), and *eᵢ* is a random error or residual with the assumption *eᵢ* ~ N(0, *σᵢ²*), where the study-specific variances *σᵢ²* are treated as known and give rise to the weights discussed below.

For the random-effects model we still assume that there is a common true effect between studies, but in addition, we allow this effect to vary between studies. Variation between these effects is a reasonable assumption, as no two studies are identical, differing in many aspects; for example, different demographics in the data, slightly differing measurement protocols, etc. Because we have a random sample of studies, we have a random sample of effects, and therefore we define a linear mixed model (LMM) using the following expression:

*yᵢ* = *θᵢ* + *eᵢ*

where, as before, *yᵢ* is the observed response from study *i*, *θᵢ* is the study-specific population parameter, with the assumption *θᵢ* ~ N(*θ*, *σₛ²*), and *eᵢ* is a random error or residual with the same normality assumptions as before. Alternatively, the above LMM can be written as:

*yᵢ* = *θ* + *sᵢ* + *eᵢ*

where *sᵢ* = *θᵢ* − *θ* is a random deviation from the overall effect mean *θ*, with assumptions *sᵢ* ~ N(0, *σₛ²*).

This is an LMM because, besides the residual, we have an additional random component with a variance component associated with it, namely *σₛ²* (often denoted *τ²* in the meta-analysis literature). This variance is a measurement of the variability 'between' studies, and it reflects the level of uncertainty of observing a specific *θᵢ*. These LMMs can be fitted, and variance components estimated, under many linear mixed model routines, such as **nlme** in R, **proc mixed** in SAS, Genstat or ASReml.

Both fixed-effect and random-effects models are often estimated using summary information, instead of the raw data collected from the original study. This summary information corresponds to estimated mean effects together with their variances (or standard deviations) and the number of samples or experimental units considered per treatment. Since the different studies provide different amounts of information, weights should be used when fitting LM or LMM to summary information in a meta-analysis, similar to weighted linear regression. In meta-analysis, each study has a different level of importance, due to, for example, differing number of experimental units, slightly different methodologies, or different underlying variability due to inherent differences between the studies. The use of weights allows us to control the influence of each observation in the meta-analysis resulting in more accurate final estimates.

Different statistical software will manage these weights slightly differently, but most packages will consider the following general expression of weights:

*wᵢ* = 1 / var(*yᵢ*)

where *wᵢ* is the weight and var(*yᵢ*) is the variance of observation *i*. For example, if the response corresponds to an estimated treatment mean, then its variance is *MSE*/*n*, with *MSE* being the mean square error reported for the given study and *n* the number of experimental units behind that mean.
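Putting these weights to work, the fixed-effect (inverse-variance) estimate is just a weighted average, and its standard error is the square root of the reciprocal of the summed weights. A small sketch in Python, since the arithmetic is language-agnostic (the numbers are made up for illustration, not taken from the stroke dataset):

```python
import math

def fixed_effect_meta(estimates, variances):
    """Inverse-variance weighted estimate: theta_hat = sum(w*y)/sum(w), w = 1/var."""
    weights = [1.0 / v for v in variances]
    theta = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the combined estimate
    return theta, se

# Toy example: three studies, one precise (variance 1) and two noisy (variance 25).
theta, se = fixed_effect_meta([-10.0, -2.0, 4.0], [1.0, 25.0, 25.0])
print(round(theta, 3), round(se, 3))  # → -9.185 0.962
```

Note how the precise study dominates the combined estimate, pulling it close to −10: this is exactly the influence-control role of the weights described above.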

Therefore, after we collect the summary data, we fit our linear or linear mixed model with weights and request from its output an estimation of its parameters and their standard errors. This will allow us to make inference, and construct, for example, a 95% confidence interval around an estimate to evaluate if this parameter/effect is significantly different from zero. This will be demonstrated in the example below.

The dataset we will use to illustrate meta-analyses was presented and previously analysed by Normand (1999). The dataset contains information from nine independent studies where the length of hospitalisation (measured in days) was recorded for stroke patients under two different treatment regimes. The main objective was to evaluate if specialist inpatient stroke care (**sc**) resulted in shorter stays when compared to the conventional non-specialist (or routine management) care (**rm**).

The complete dataset is presented below, and it can also be found in the file STROKE.txt. In this table, the columns present for each study are the sample size (**n.sc** and **n.rm**), their estimated mean value (**mean.sc** and **mean.rm**) together with their standard deviation (**sd.sc** and **sd.rm**) for both the specialist care and routine management care, respectively.

We will use the statistical package R to read and manipulate the data, and then the library ASReml-R (Butler *et al*. 2017) to fit the models.

First, we read the data in R and make some additional calculations, as shown in the code below:

```
STROKE <- read.table("STROKE.TXT", header = TRUE)
```

```
STROKE$diff  <- STROKE$mean.sc - STROKE$mean.rm
STROKE$Vdiff <- (STROKE$sd.sc^2 / STROKE$n.sc) + (STROKE$sd.rm^2 / STROKE$n.rm)
STROKE$WT    <- 1 / STROKE$Vdiff
```

The new column **diff** contains the difference between treatment means (as reported from each study). We have estimated the variance of this mean difference, **Vdiff**, by taking each treatment's individual variance (its **MSE**, mean square error), dividing it by the sample size, and then summing the terms of both treatments. This estimate assumes that, for a given study, the samples from both treatments are independent, and for this reason we did not include a covariance. Finally, we have calculated a weight (**WT**) for each study as the inverse of the variance of the mean difference (*i.e.*, **1/Vdiff**).

We can take another look at this data with these extra columns:

The above table shows a wide range of values between the studies in the mean difference of length of stay between the two treatments, ranging from as low as **−71.0** to **11.0**, with a raw average of **−15.9**. Also, the variances of these differences vary considerably, which is also reflected in their weights.

The code to fit the fixed-effect linear model using ASReml-R is shown below:

```
library(asreml)
meta_f <- asreml(fixed = diff ~ 1,
                 weights = WT,
                 family = asr_gaussian(dispersion = 1),
                 data = STROKE)
```

In the above model, our response variable is **diff**, and the weights are indicated by the variate **WT**. As the precisions are contained within the weights, the **family** argument is required to fix the residual variance (**MSE**) at exactly **1.0**; hence, it will not be estimated.

The model generates output that can be used for inference. We will start by exploring our target parameter, *i.e.*, *θ*, by looking at the estimated fixed effect mean and its standard error. This is done with the code:

```
meta_effect <- summary(meta_f, coef = TRUE)$coef.fixed
```

Resulting in the output:

The estimate of *θ*, together with its standard error, can be read from this output. An approximate ANOVA for this model is obtained with:

```
wald.asreml(meta_f)
```

Note that this test is approximate: given that the weights are considered known, the degrees of freedom are assumed to be infinite; hence, this will be a liberal estimate.

The results from this ANOVA table indicate that this parameter (*θ*) is highly significant, with a very small approximate p-value.

However, as indicated earlier, a random-effects model might seem more reasonable given the inherent differences in the studies under consideration. Here, we extend the model to include the random effect of study. In order to do this, first we need to ensure that this is treated as a factor in the model by running the code:

```
STROKE$study <- as.factor(STROKE$study)
```

The LMM to be fitted using ASReml-R is:

```
meta_r <- asreml(fixed = diff ~ 1,
                 random = ~ study,
                 weights = WT,
                 family = asr_gaussian(dispersion = 1),
                 data = STROKE)
```

Note that the only difference from the previous code is the inclusion of the line **random=~study**, which includes the factor study as a random effect. An important result from this model is the set of variance component estimates. These are obtained with the command:

```
summary(meta_r)$varcomp
```

In this example, the variance associated with the differences in the target parameter (*θ*) between the studies is estimated as **684.62**, indicating considerable heterogeneity between studies.

We can output the fixed and random effects using the following commands:

```
meta_effect <- summary(meta_r, coef = TRUE)$coef.fixed
BLUP        <- summary(meta_r, coef = TRUE)$coef.random
```

Note that our estimated mean difference now corresponds to **−15.106** days with a standard error of **8.943**, and that the approximate 95% confidence interval [**−32.634; 2.423**] now contains zero. An approximate ANOVA based on the following code:

```
wald.asreml(meta_r)
```

results in the output:

We have a p-value of **0.0912**, indicating that there is no significant difference in length of stay between the treatments evaluated. Note that the estimates of the random effects of study, also known as BLUPs (best linear unbiased predictions), are large, ranging from **−45.8** to **22.9**, and widely variable. The lack of significance in the random-effects model, despite a mean difference of **−15.11** days, is mostly due to the large between-study variance of **684.62**, which results in a substantial standard error for the estimated mean difference.
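As a quick arithmetic check, the approximate 95% confidence interval quoted for the random-effects model follows directly from the estimate and its standard error (estimate ± 1.96 × SE):

```python
# Recomputing the approximate 95% CI from the estimate and SE quoted in the text.
est, se = -15.106, 8.943
lo, hi = est - 1.96 * se, est + 1.96 * se
print(round(lo, 3), round(hi, 3))  # → -32.634 2.422 (2.423 in the text reflects rounding of the inputs)
```

Because this interval straddles zero, the Wald test above cannot declare the treatment difference significant, even though the point estimate itself is sizeable.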

In the following graph we can observe the 95% confidence intervals for each of the nine studies, together with the final parameter estimated under the *random-effects* model. Some of these confidence intervals contain the value zero, including the one for the random-effects model. However, it can be observed that the confidence interval from the random-effects model is an adequate summary of the nine studies, representing a reasonable compromise between them.

An important aspect to consider is the difference in results between the fixed-effect and the random-effects model, associated, as indicated earlier, with different inferential approaches. One way to understand this is by considering what would happen if a new random study were included. Because we have large variability in the study effects (as denoted by *σₛ²*), we expect this new study to have a treatment difference falling randomly within this wide range. This, in turn, is expressed by the large standard error of the estimate of *θ*, and by its large 95% confidence interval, which ensures that for 'any' new study we cover the parameter estimate 95% of the time. Therefore, as shown by the data, it seems more reasonable to consider the random-effects model than the fixed-effect model, as it is an inferential approach that accounts for several sources of variation.

In summary, we have used the random-effects model to perform a meta-analysis on a medical research question of treatment differences by combining nine independent studies. Under this approach we assumed that all studies describe the same effect, but we allowed the model to express different effect sizes through the inclusion of a random effect that varies from study to study. The main aim of this analysis was not to explain why these differences occur; rather, our aim was to incorporate a measure of this uncertainty into the estimation of the final effect of treatment differences.

There are several extensions to meta-analysis with different types of responses and effects. Some of the relevant literature recommended to the interested reader includes van Houwelingen *et al*. (2002) and Vesterinen *et al*. (2014). Also, a clear presentation with further details of the differences between fixed-effect and random-effects models is given by Borenstein *et al*. (2010).

Dataset: STROKE.txt

R code: STROKE_METAA.R

Borenstein, M; Hedges, LV; Higgins, JPT; Rothstein, HR. 2010. *A basic introduction to fixed-effect and random-effects models for meta-analysis.* Research Synthesis Methods 1: 97-111.

Butler, DG; Cullis, BR; Gilmour, AR; Gogel, BG; Thompson, R. 2017. *ASReml-R Reference Manual Version 4.* VSN International Ltd, Hemel Hempstead, HP2 4TP, UK.

Normand, ST. 1999. *Meta-analysis: Formulating, evaluating, combining, and reporting.* Statistics in Medicine 18: 321-359.

van Houwelingen, HC; Arends, LR; Stijnen, T. 2002. *Advanced methods in meta-analysis: multivariate approach and meta-regression.* Statistics in Medicine 21: 589-624.

Vesterinen, HM; Sena, ES; Egan, KJ; Hirst, TC; Churilov, L; Currie, GL; Antonic, A; Howells, DW; Macleod, MR. 2014. *Meta-analysis of data from animal studies: a practical guide.* Journal of Neuroscience Methods 221: 92-102.


Kanchana Punyawaew

01 March 2021

Linear mixed models: a balanced lattice square

This blog illustrates how to analyze data from a field experiment with a balanced lattice square design using linear mixed models. We’ll consider two models: the balanced lattice square model and a spatial model.

The example data are from a field experiment conducted at Slate Hall Farm, UK, in 1976 (Gilmour *et al*., 1995). The experiment was set up to compare the performance of 25 varieties of barley and was designed as a balanced lattice square with six replicates laid out in a 10 x 15 rectangular grid. Each replicate contained exactly one plot for every variety. The variety grown in each plot, and the coding of the replicates and lattice blocks, is shown in the field layout below:

There are seven columns in the data frame: five blocking factors (*Rep, RowRep, ColRep, Row, Column*), one treatment factor, *Variety*, and the response variate, *yield*.

The six replicates are numbered from 1 to 6 (*Rep*). The lattice block numbering is coded within replicates. That is, within each replicate the lattice rows (*RowRep*) and lattice columns (*ColRep*) are both numbered from 1 to 5. The *Row* and *Column* factors define the row and column positions within the field (rather than within each replicate).

To analyze the response variable, *yield*, we need to identify the two basic components of the experiment: the treatment structure and the blocking (or design) structure. The treatment structure consists of the set of treatments, or treatment combinations, selected to study or to compare. In our example, there is one treatment factor with 25 levels, *Variety* (i.e. the 25 different varieties of barley). The blocking structure of replicates (*Rep*), lattice rows within replicates (*Rep:RowRep*), and lattice columns within replicates (*Rep:ColRep*) reflects the balanced lattice square design. In a mixed model analysis, the treatment factors are (usually) fitted as fixed effects and the blocking factors as random.

The balanced lattice square model is fitted in ASReml-R4 using the following code:

```
> lattice.asr <- asreml(fixed = yield ~ Variety,
                        random = ~ Rep + Rep:RowRep + Rep:ColRep,
                        data = data1)
```

The REML log-likelihood is -707.786.

The model’s BIC is:

The estimated variance components are:

The table above contains the estimated variance components for all terms in the random model. The variance component measures the inherent variability of the term, over and above the variability of the sub-units of which it is composed. The variance components for *Rep*, *Rep:RowRep* and *Rep:ColRep* are estimated as 4263, 15596, and 14813, respectively. As is typical, the largest unit (replicate) is more variable than its sub-units (lattice rows and columns within replicates). The *"units!R"* component is the residual variance.

By default, fixed effects in ASReml-R4 are tested using sequential Wald tests:

In this example, there are two terms in the summary table: the overall mean, (*Intercept*), and *Variety*. As the tests are sequential, the effect of *Variety* is assessed by calculating the change in sums of squares between the two models (*Intercept*)+*Variety* and (*Intercept*). The p-value (Pr(Chisq)) of < 2.2 × 10⁻¹⁶ indicates that *Variety* is highly significant.

The predicted means for the *Variety* can be obtained using the predict() function. The standard error of the difference between any pair of variety means is 62. Note: all variety means have the same standard error as the design is balanced.

Note: the same analysis is obtained when the random model is redefined as replicates (*Rep*), rows within replicates (*Rep:Row*) and columns within replicates (*Rep:Column*).

As the plots are laid out in a grid, the data can also be analyzed using a spatial model. We’ll illustrate spatial analysis by fitting a model with a separable first order autoregressive process in the field row (*Row*) and field column (*Column*) directions. This is often a useful model to start the spatial modeling process.

The separable first order autoregressive spatial model is fitted in ASReml-R4 using the following code:

```
> spatial.asr <- asreml(fixed = yield ~ Variety,
                        residual = ~ ar1(Row):ar1(Column),
                        data = data1)
```

The BIC for this spatial model is:

The estimated variance components and sequential Wald tests are:

The residual variance is 38713, the estimated row correlation is 0.458, and the estimated column correlation is 0.684. As in the balanced lattice square model, there is strong evidence of a *Variety* effect (p-value < 2.2 × 10⁻¹⁶).
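The "separable" part of this model has a concrete matrix meaning: the correlation between any two plots is the product of a row-direction and a column-direction AR1 correlation, so the full correlation matrix is a Kronecker product of the two AR1 matrices. A small numpy sketch using the estimated correlations above (the 3 × 4 grid size is chosen only to keep the illustration small):

```python
import numpy as np

def ar1_corr(n, rho):
    """n x n AR1 correlation matrix: corr(i, j) = rho ** |i - j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

rho_row, rho_col = 0.458, 0.684       # estimates from the fitted spatial model
R = np.kron(ar1_corr(3, rho_row),     # correlation along field rows
            ar1_corr(4, rho_col))     # correlation along field columns

# Plots one row AND one column apart have correlation rho_row * rho_col:
print(np.isclose(R[0, 5], rho_row * rho_col))  # → True
```

This separability is what makes the AR1×AR1 model cheap to fit even on large field grids, since the two small matrices never need to be multiplied out in full during estimation.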

A log-likelihood ratio test cannot be used to compare the balanced lattice square model with the spatial model, as their variance models are not nested. However, the two models can be compared using BIC. As the spatial model has a smaller BIC (1415) than the balanced lattice square model (1435), it is chosen as the preferred of the two models explored in this blog. However, selecting the optimal spatial model can be difficult. The current spatial model could be extended by including measurement error (or a nugget effect), or revised by selecting a different variance model for the spatial effects.

Butler, D.G., Cullis, B.R., Gilmour, A. R., Gogel, B.G. and Thompson, R. (2017). *ASReml-R Reference Manual Version 4.* VSN International Ltd, Hemel Hempstead, HP2 4TP UK.

Gilmour, A.R., Thompson, R. & Cullis, B.R. (1995). *Average Information REML, an efficient algorithm for variance parameter estimation in linear mixed models*. Biometrics, 51, 1440-1450.
