cNORM (A. Lenhard, Lenhard & Gary, 2018) is an R package that generates continuous test norms for psychometric and biometric data and analyzes the associated model fit. Originally, cNORM exclusively used an approach that makes no assumptions about the specific distribution of the raw data (A. Lenhard, Lenhard, Suggate & Segerer, 2016). Since version 3.2 (2024), however, the package also offers the option of parametric modeling using the beta-binomial distribution.
cNORM was developed specifically for achievement tests (e.g. vocabulary development: A. Lenhard, Lenhard, Segerer & Suggate, 2015; written language acquisition: W. Lenhard, Lenhard & Schneider, 2017). However, the package can be used wherever mental (e.g. reaction time), physical (e.g. body weight) or other test scores depend on continuous (e.g. age, duration of schooling) or discrete explanatory variables (e.g. sex, test form). In addition, the package can also be used for “conventional” norming based on individual groups, i.e. without including explanatory variables.
The package estimates percentiles as a function of the explanatory variable. This is done either parametrically on the basis of the beta-binomial distribution or distribution-free using Taylor polynomials. Mathematical modeling of the data using continuous variables such as age has several advantages over norming based on discrete groups alone.
In this vignette, we will demonstrate the necessary steps for the application of the R package with real human performance data, namely with the normative sample of the sentence comprehension subtest of ELFE 1-6, a reading comprehension test in the German language (W. Lenhard & Schneider, 2006), and with the German adaptation of the Peabody Picture Vocabulary Test 4 (A. Lenhard et al., 2015).
The rationale of the approach is to rank the results within the different age cohorts (= age, a) or continuously with a sliding window and thus to determine the observed norm scores (= location, l). Afterwards, powers of the age-specific location and of the age are computed, as well as all linear interactions. Thus, we model the raw score r as a function of the powers of location l and age a and their interactions by a Taylor polynomial: $$r = f(l, a) = \sum_{k=0}^K \sum_{t=0}^T \beta_{k,t} \cdot l^k \cdot a^t$$
where l denotes the person location (norm score), a the explanatory variable (e.g., age), K and T the maximum powers of location and age, and $\beta_{k,t}$ the regression coefficients.
Finally, the data is fitted by a hyperplane via multiple regression and the most relevant terms are identified.
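For illustration, with K = 2 and T = 1 the full polynomial before term selection would read $$r = \beta_{0,0} + \beta_{1,0} \cdot l + \beta_{2,0} \cdot l^2 + \beta_{0,1} \cdot a + \beta_{1,1} \cdot l \cdot a + \beta_{2,1} \cdot l^2 \cdot a$$ and the regression then retains only the most relevant of these terms.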
The ‘cnorm’ method combines most of these steps in one go. The following example in a nutshell already suffices for establishing norm scores: it conducts the ranking, the computation of powers and the modeling. A detailed explanation of the distinct steps follows afterwards.
## Basic example code for modeling the sample dataset
library(cNORM)
# Start the graphical user interface (needs shiny installed)
# The GUI includes the most important functions. For specific cases,
# please use cNORM on the console.
cNORM.GUI()
# Using the syntax on the console: The function 'cnorm' performs
# all steps automatically. Please specify the raw score and the
# grouping variable. The resulting object contains the ranked data
# via object$data and the model via object$model.
cnorm.elfe <- cnorm(raw = elfe$raw, group = elfe$group)
# Plot different indicators of model fit depending on the number of
# predictors
plot(cnorm.elfe, "subset", type=0) # plot R2
plot(cnorm.elfe, "subset", type=3) # plot RMSE
# NOTE! At this point, you usually select a well-fitting model and rerun
# the process with a fixed number of terms, e.g., 4. Avoid models
# with a high number of terms:
cnorm.elfe <- cnorm(raw = elfe$raw, group = elfe$group, terms = 4)
# Powers of age can be specified via the parameter 't'.
# Cubic modeling is usually sufficient, i.e., t = 3.
# In contrast, 'k' specifies the power of the person location.
# This parameter should be somewhat higher, e.g., k = 5.
cnorm.elfe <- cnorm(raw = elfe$raw, group = elfe$group, k = 5, t = 3)
# Visual inspection of the percentile curves of the fitted model
plot(cnorm.elfe, "percentiles")
# Visual inspection of the observed and fitted raw and norm scores
plot(cnorm.elfe, "norm")
plot(cnorm.elfe, "raw")
# In order to compare different models, generate a series of percentile
# plots with an ascending number of predictors, in this example between
# 5 and 14 predictors.
plot(cnorm.elfe, "series", start=5, end=14)
# Cross validation in order to choose appropriate number of terms
# with 80% of the data for training and 20% for validation. Due to
# the time consumption, the maximum number of terms is limited to 10
# in this example with 3 repetitions of the cross validation.
cnorm.cv(cnorm.elfe$data, max=10, repetitions=3)
# Cross validation with prespecified terms of an already
# existing model
cnorm.cv(cnorm.elfe, repetitions=3)
# Print norm table (in this case: 0, 3 or 6 months at grade level 3)
# (Note: The data is coded such that 3.0 represents the beginning and
# 3.5 the middle of the third school year)
normTable(c(3, 3.25, 3.5), cnorm.elfe)
## Conventional norming per age group
library(cNORM)
# Application of cNORM for the generation of conventional norms
# for a specific age group (in this case age group 3):
data <- elfe[elfe$group == 3,]
cnorm(raw=data$raw)
In the following, we describe the individual steps in detail.
The starting point for standardization should always be a representative sample. Establishing representativeness is one of the most difficult tasks of test construction and must therefore be carried out with appropriate care. First of all, it is important to identify those variables that systematically covary with the variable to be measured. In the case of school achievement and intelligence tests, these are, for example, educational background of the parents, the federal state, the socio-economic background, etc. Caution: Increasing the sample size is only beneficial for the quality of the standardization if the covariates do not remain systematically distorted. For example, it would be useless or even counterproductive to increase the size of the sample if the sample was only collected from a single type of school or only in a single region. One advantage of continuous norming is the generally low sample size required. One way of achieving representativeness is therefore to delete as many randomly selected cases from overrepresented strata as necessary, until the individual strata are represented with the required percentage in the overall sample. However, this means that laboriously collected data is lost again.
If representativeness cannot be achieved by removing cases, a second option is to weight the data using Iterative Proportional Fitting (Raking). In simulation studies (Gary et al., 2023, 2024), we were able to show that weighting usually leads to more precise norm scores. However, we have so far only conducted these simulation studies using the distribution-free continuous norming method implemented in cNORM. Problems with weighting only arose when the variance in the standardization sample differed greatly from the actual variance in the reference population. Therefore, when applying weighting, make sure that the deviations from representativeness that must be compensated for are not excessive and that subgroups whose average test scores deviate relatively strongly from the population mean are already sufficiently taken into account during data collection.
For conducting weighting, please consult the vignette on ‘WeightedRegression’.
The appropriate sample size cannot be quantified in a definitive way, but depends on how well the test (or scale) must differentiate in the extreme sections of the norm scale. In many countries, for example, it is common (although not always reasonable) to differentiate between IQ < 70 and IQ > 70 to diagnose developmental disabilities and to choose the appropriate school type or educational track. An IQ test used for school placement must therefore be able to identify a deviation of 2 SD or more from the population mean as reliably as possible. If, on the other hand, a reading/spelling disorder is to be diagnosed, a deviation of 1.5 SD from the population mean is generally sufficient according to DSM-5. As a rule of thumb for determining the ideal sample size, it can be stated that the measurement error caused by the norming procedure is particularly high in those performance areas that are only represented with low probability in the norming sample. (This does not only apply to continuous norming, but to all norming methods.) For example, in a representative random sample of N = 100, the probability that there is not a single child with an IQ below 70 is about 10%. For a sample size of N = 200, this probability decreases to 1%. Doubling the sample size thus notably improves the reliability of the norm scores in ranges markedly deviating from the scale mean.
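These probabilities can be verified with a few lines of R; a minimal sketch, assuming normally distributed IQ scores (M = 100, SD = 15):
# Probability of an IQ below 70 (i.e., below -2 SD) in the population
p <- pnorm(70, mean = 100, sd = 15)   # approx. 0.023
# Probability that a representative sample contains no such case
(1 - p)^100                           # approx. 0.10 for N = 100
(1 - p)^200                           # approx. 0.01 for N = 200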
Since continuous norming models are always based on the entire sample, the statistical power of the norming procedure increases for each individual age. As a result, the required size of the norm sample can be substantially reduced. With a sample size of n = 100 per cohort or grade level, the norms already reach a goodness of fit that conventional norming only attains with sample sizes of n = 400 and more (W. Lenhard & Lenhard, 2021). Thus, not only do the norm scores become more precise, but the standardization projects also become more cost-effective overall.
Once a representative sample of sufficient size has been created, the data must be loaded into the R workspace. cNORM excludes cases with missing values in the relevant variables. For continuous norming, in addition to the variable with the raw scores, an explanatory variable (e.g. age or duration of schooling) is required, which can be represented as a discrete grouping variable or as a continuous variable. Please ensure that the discrete grouping variable is a numerical variable with the group mean of the corresponding continuous variable being used as the variable's value, e.g. 10.5 for all children aged between 10 and 11. If only a continuous variable is initially available when applying the distribution-free method (i.e., modeling with Taylor polynomials), then this variable must be recoded into a discrete grouping variable. However, the method is relatively robust to changes in the granularity of the group subdivision. For example, the modeling result barely depends on whether the sample is divided into age brackets of 6 months or 12 months (see A. Lenhard, Lenhard, Suggate, & Segerer, 2016). The more the course of the raw scores across the explanatory variable deviates from a linear development, the finer the groups should be formed. In parametric modeling with the beta-binomial distribution, an additional group variable is generally unnecessary.
For recoding a continuous explanatory variable into a group variable, the following function can be used:
# Creates a grouping variable for the variable 'age'
# of the ppvt data set. In this example, 12 equidistant
# subgroups are generated.
group <- getGroups(ppvt$age, 12)
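The resulting grouping variable can then be passed to 'cnorm' together with the raw scores; a brief sketch using the ppvt dataset shipped with the package:
# Model the ppvt raw scores with the newly created grouping variable
model.ppvt <- cnorm(raw = ppvt$raw, group = group)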
In the cNORM package, distribution-free modeling of norm scores is based on estimating raw scores as a power function of person location l and the explanatory variable a. This approach requires several steps, which internally can be executed by a single function - the ‘cnorm()’ function.
First, the ‘cnorm()’ function estimates preliminary values for the person locations. To this purpose, the raw scores within each group are ranked. Alternatively, a sliding window can be used in conjunction with the continuous explanatory variable. In this case, the width of the sliding window (function parameter ‘width’) must be specified. Subsequently, the ranks are converted to norm scores using inverse normal transformation. The resulting norm scores serve as estimators for the person locations. To compensate for violations of representativeness, weights can be included in this process (see Weighting).
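As a sketch of the sliding-window alternative, assuming the continuous 'age' variable of the ppvt dataset and a window width of one year:
# Ranking with a sliding window over the continuous age variable
model.sliding <- cnorm(raw = ppvt$raw, age = ppvt$age, width = 1)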
The second internal step is the calculation of powers for l and a. Powers are calculated up to a certain exponent 'k'. In order to use a different exponent for a than for l, you can also specify the parameter 't' (see mathematical derivation), which is beneficial in most cases. If neither k nor t is specified, the 'cnorm()' function uses the values k = 5 and t = 3, which have proven effective in practice. All powers are also multiplied crosswise with each other to capture the interactions of l and a in the subsequent regression. The object finally returned by the 'cnorm()' function contains the preprocessed data including the manifest norm scores and all powers and interactions of l and a in 'model$data'.
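For users who want to carry out these preprocessing steps themselves, the following sketch uses the lower-level functions named in the package documentation; argument forms may differ slightly between cNORM versions, and the model-selection step of the next paragraph is included only for completeness:
# Step 1: rank raw scores within each group and estimate person locations
normData <- rankByGroup(elfe, raw = "raw", group = "group")
# Step 2: compute powers and interactions of location and age
normData <- computePowers(normData, k = 5, t = 3)
# Step 3: model selection via multiple regression (see next paragraph)
model.manual <- bestModel(normData)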
In the third internal step, 'cnorm' determines a regression function. Following the principle of parsimony, models that achieve the highest adjusted R² with as few predictors as possible should be selected. The 'cnorm()' function uses the 'regsubsets()' function from the 'leaps' package for the regression. Two different model selection strategies are possible: You can either specify the minimum value required for the adjusted R². Then the regression function that meets this requirement with the fewest predictors is selected. Or you can specify a fixed number of predictors. Then the model that achieves the highest adjusted R² with this number of predictors is selected. Unfortunately, it is usually not known in advance how many predictors are needed to fit the data optimally. How to find the best possible model is explained in the following section Model Selection. To begin with, however, we would first like to explain the basic functionality of cNORM using a simple modeling example. To do this, we will use the 'elfe' dataset provided and start with the default setting. Since cNORM version 3.3.0, this setting provides the model with the highest adjusted R² while avoiding inconsistencies. (What we mean by 'inconsistency' is also explained in more detail in the section Model Selection.)
library(cNORM)
# Models the 'raw' variable as a function of the discrete 'group' variable
model <- cnorm(raw = elfe$raw, group = elfe$group)
#> Powers of location: k = 5
#> Powers of age: t = 3
#> Multiple R2 between raw score and explanatory variable: R2 = 0.5129
#>
#> Final solution: 10 terms (highest consistent model)
#> R-Square Adj. = 0.992668
#> Final regression model: raw ~ L2 + A3 + L1A1 + L1A2 + L1A3 + L2A1 + L2A2 + L3A1 + L4A3 + L5A3
#> Regression function: raw ~ 9.268833029 + (-0.005896936062*L2) + (-0.3661228589*A3) + (-0.8939380673*L1A1) + (0.2822969807*L1A2) + (-0.002256350426*L1A3) + (0.02269861333*L2A1) + (-0.005157856232*L2A2) + (-5.99057391e-05*L3A1) + (6.480302151e-08*L4A3) + (-3.957287831e-10*L5A3)
#> Raw Score RMSE = 0.60987
#>
#> Use 'printSubset(model)' to get detailed information on the different solutions, 'plotPercentiles(model) to display percentile plot, plotSubset(model)' to inspect model fit.
The model explains more than 99.2% of the data variance but requires a relatively high number of 10 predictors (plus intercept) to do so. The line labelled 'Final regression model' reports which powers and interactions were included in the regression. For example, L2 represents the second power of l, A3 represents the third power of a, and so on. By default, the location l is returned in T-scores (M = 50, SD = 10). However, IQ scores, z-scores, percentiles, or any vector containing mean and standard deviation (e.g., c(10, 3) for Wechsler scaled scores) can be selected instead by specifying the 'scale' parameter of the 'cnorm' function. Subsequently, the complete regression formula including coefficients is returned.
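As a brief sketch of the 'scale' parameter mentioned above (both variants are taken from the examples in the text):
# Norm scores as IQ scores (M = 100, SD = 15) instead of T-scores
model.iq <- cnorm(raw = elfe$raw, group = elfe$group, scale = "IQ")
# Norm scores as Wechsler scaled scores (M = 10, SD = 3)
model.wechsler <- cnorm(raw = elfe$raw, group = elfe$group, scale = c(10, 3))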
The returned 'model' object contains both the data (model$data) and the regression model (model$model). All information about the model selection can be accessed under 'model$model$subsets'. The variable selection process is listed step by step in 'outmat' and 'which'. There you can find R², adjusted R², Mallow's Cp, and the BIC. The regression coefficients for the selected model ('model$model$coefficients') are available, as are the fitted values ('model$model$fitted.values') and all other information. A table with the corresponding information can be printed using the following code:
print(model)
#> R2adj BIC CP RSS RMSE DeltaR2adj
#> 1 0.9197525 -3518.209 14746.83265 5736.1349 2.0241638 NA
#> 2 0.9800576 -5461.138 2614.81241 1424.4767 1.0087038 6.030503e-02
#> 3 0.9908199 -6541.039 452.04761 655.2571 0.6841351 1.076237e-02
#> 4 0.9914116 -6628.074 333.95429 612.5836 0.6614830 5.916977e-04
#> 5 0.9915936 -6651.813 298.22070 599.1755 0.6542037 1.819543e-04
#> 6 0.9919912 -6713.408 219.31297 570.4260 0.6383159 3.976084e-04
#> 7 0.9922445 -6752.171 169.41665 551.9854 0.6279134 2.533397e-04
#> 8 0.9925535 -6802.851 108.45630 529.6133 0.6150571 3.089811e-04
#> 9 0.9926317 -6811.395 93.74201 523.6741 0.6115987 7.820923e-05
#> 10 0.9926680 -6812.068 87.43107 520.7209 0.6098717 3.627741e-05
#> 11 0.9927589 -6823.287 70.23231 513.8989 0.6058635 9.084391e-05
#> 12 0.9927927 -6823.616 64.42967 511.1263 0.6042270 3.387453e-05
#> 13 0.9928281 -6824.277 58.33021 508.2483 0.6025234 3.541165e-05
#> 14 0.9928384 -6820.050 57.25115 507.1542 0.6018746 1.027172e-05
#> 15 0.9928841 -6822.769 49.12978 503.5577 0.5997367 4.564866e-05
#> 16 0.9928940 -6818.497 48.12485 502.4899 0.5991005 9.954452e-06
#> 17 0.9929119 -6815.791 45.54800 500.8636 0.5981302 1.787357e-05
#> 18 0.9929725 -6821.589 34.47790 496.2193 0.5953506 6.063981e-05
#> 19 0.9930215 -6825.145 25.74684 492.4062 0.5930588 4.894852e-05
#> 20 0.9930324 -6821.116 24.56708 491.2763 0.5923780 1.096405e-05
#> 21 0.9930473 -6817.875 22.61991 489.8737 0.5915318 1.485024e-05
#> 22 0.9930457 -6811.317 23.94451 489.6337 0.5913868 -1.640457e-06
#> 23 0.9930504 -6806.050 24.00000 488.9428 0.5909694 4.766823e-06
#> F p nr consistent
#> 1 NA NA 1 TRUE
#> 2 4228.4907653 0.000000e+00 2 TRUE
#> 3 1638.7926544 0.000000e+00 3 TRUE
#> 4 97.1778501 0.000000e+00 4 TRUE
#> 5 31.1943874 2.802835e-08 5 TRUE
#> 6 70.2071670 1.110223e-16 6 TRUE
#> 7 46.5037313 1.359068e-11 7 TRUE
#> 8 58.7590888 3.330669e-14 8 TRUE
#> 9 15.7645352 7.540381e-05 9 TRUE
#> 10 7.8774779 5.075334e-03 10 TRUE
#> 11 18.4257222 1.889145e-05 11 FALSE
#> 12 7.5236687 6.167730e-03 12 FALSE
#> 13 7.8484276 5.157214e-03 13 FALSE
#> 14 2.9879117 8.411115e-02 14 FALSE
#> 15 9.8847574 1.701950e-03 15 FALSE
#> 16 2.9387829 8.669992e-02 16 FALSE
#> 17 4.4874090 3.432345e-02 17 FALSE
#> 18 12.9252315 3.355968e-04 18 FALSE
#> 19 10.6865636 1.105753e-03 19 FALSE
#> 20 3.1715487 7.515144e-02 20 FALSE
#> 21 3.9453954 4.719763e-02 21 FALSE
#> 22 0.6749444 4.114753e-01 22 FALSE
#> 23 1.9445051 1.634053e-01 23 FALSE
Mathematically, the regression function represents a hypersurface in three-dimensional space. When R² is sufficiently high (e.g., R² > .99), this surface typically models the manifest data very well over wide ranges of the normative sample. However, a Taylor polynomial, as used here, usually has a finite radius of convergence. In practice, this means that at some age or performance ranges the regression function might no longer provide plausible values. The model might, for example, unexpectedly deviate strongly from the manifest data. Such areas are usually best recognized by graphically comparing manifest and modeled data. For this purpose, cNORM provides, among other things, percentile plots.
In the percentile plot, the manifest data are represented as dots, while the modeled percentiles are shown as lines. The raw score range is automatically determined based on the values from the original dataset. However, it can also be explicitly specified using the parameters ‘minRaw’ and ‘maxRaw’.
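A short sketch, assuming that additional arguments such as 'minRaw' and 'maxRaw' are passed through to the underlying percentile plot:
# Percentile plot with an explicitly fixed raw score range
plot(model, "percentiles", minRaw = 0, maxRaw = 28)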
As can be seen from the percentile plot above, the percentiles of the norming model run relatively smoothly across all levels of the explanatory variable and align well with the manifest data. Small fluctuations between individual groups are eliminated. The uppermost percentile line (PR = 97.5) runs horizontally from the fourth grade onward. However, this does not represent a limitation of the model, but rather a ceiling effect of the test, as the maximum raw score of 28 is reached at this point. Nevertheless, implausible values or model inconsistencies often appear at those points where the test has floor or ceiling effects or where the normative sample is too sparse, that is, they usually occur at the boundaries of the age or performance range of the normative sample, or even beyond. Therefore, when checking the percentile lines, particular attention should be paid to the model's behavior at these boundaries.
Although the percentile plot suggests a good model fit, the number of predictors (i.e., 10) is relatively high. This carries the risk of overfitting the data. The most obvious sign of overfitting is usually wavy (and therefore counterintuitive) percentile lines. Since they are not wavy for the calculated model, there is not much indication of relevant overfitting here. Moreover, if too much emphasis is placed on parsimony, an insufficient fit in extreme performance ranges can occur. Therefore, the proposed model with 10 predictors seems to be an adequate option. However, the cNORM package provides methods to find even more parsimonious models, which we will demonstrate in the following.
First, we recommend performing a visual inspection using percentile plots. To this end, cNORM offers a function that generates a series of percentile plots with an increasing number of predictors.
# Generates a series of percentile plots with increasing number of predictors
plotPercentileSeries(model, start = 1, end = 15)
In this case, a series of percentile plots with 1 to 15 predictors will be generated. The percentile lines begin to intersect from 12 predictors onward in the higher grade levels. This means that a single raw score is mapped onto two different norm scores, i.e., the mapping of latent person variables to raw scores is not bijective in these models. Consequently, at least for some raw scores, it would be impossible to determine a definitive norm score when using these models.
There can be various reasons for intersecting percentile lines.
If the power parameter ‘t’ for the explanatory variable a is chosen too high, very wavy percentile lines can occur in addition to crossing ones. For comparison, you will find below the series of percentile plots that is obtained with k = 5 and t = 4.
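The following sketch shows how such a comparison series can be generated by refitting the model with t = 4:
# Refit with a higher power of age and inspect the percentile series
model.t4 <- cnorm(raw = elfe$raw, group = elfe$group, k = 5, t = 4)
plotPercentileSeries(model.t4, start = 1, end = 15)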
With 14 predictors, wavy percentile lines can be seen at PR = 2.5, but even with 7 or more predictors, the percentile lines no longer increase as monotonically as desired.
Let's return to our model series with k = 5 and t = 3, which evidently produces better results than k = 5 and t = 4. Our goal was to potentially find models that provide sufficiently good modeling results with fewer than 10 predictors. To this end, you can also examine how the addition of predictors affects the adjusted R². For this purpose, use the following command:
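# Adjusted R2 by number of predictors (cf. the basic example above)
plot(model, "subset", type = 0)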
If 'type = 1' is set in the 'plot()' function instead of 'type = 0', Mallow's Cp is displayed in logarithmic form. With 'type = 2', the BIC (Bayesian Information Criterion) is plotted against the adjusted R².
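The corresponding calls, mirroring the example above:
plot(model, "subset", type = 1)   # log-transformed Mallow's Cp
plot(model, "subset", type = 2)   # BIC against adjusted R2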
Fortunately, the scale can be modeled very well, which is evident from the fact that it is possible to find consistent models that explain more than 99% of the data variance. This value is already achieved with just three predictors (adjusted R² = .991). With 8 predictors, 99.3% of the data variance is explained by the corresponding model, only 0.2% more than with three predictors. But even such small increases can sometimes lead to significant improvements for extreme person locations. Beyond 8 predictors, however, adding further predictors does not even lead to changes in the per mille range. So all models between 3 and 8 predictors seem to fit well, but are more parsimonious than the model with 10 predictors and can thus be considered for selection, too. Which of these models should ultimately be selected will be determined through the following cross-validation.
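A sketch of this cross-validation, mirroring the call from the basic example above (80% of the data for training, 20% for validation, at most 10 terms, 3 repetitions):
# Cross-validation over the preprocessed data of the fitted model
cnorm.cv(model$data, max = 10, repetitions = 3)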
In addition to the pure modeling functions, cNORM also contains functions for generating norm tables or retrieving the normal score for a specific raw score and vice versa.
The ‘predictNorm’ function returns the normal score for a specific raw score (e.g., raw = 15) and a specific age (e.g., a = 4.7). The normal scores have to be limited to a minimum and maximum value in order to take into account the limits of model validity.
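A brief sketch; the limits of 25 and 75 T-score points are chosen here purely for illustration:
# T-score for a raw score of 15 at age 4.7, limited to the range 25 to 75
predictNorm(15, 4.7, model, minNorm = 25, maxNorm = 75)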
The ‘predictRaw’ function returns the predicted raw score for a specific normal score (e.g., T = 55) and a specific age (e.g., a = 4.5).
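A brief sketch, assuming the regression coefficients of the fitted model are passed as the 'coefficients' argument:
# Predicted raw score for T = 55 at age 4.5
predictRaw(55, 4.5, model$model$coefficients)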
The ‘normTable’ function returns the corresponding raw scores for a specific age (e.g., a = 3) and a pre-specified series of normal scores. The parameter ‘step’ specifies the distance between two normal scores.
normTable(3, model, minRaw = 0, maxRaw = 28, minNorm=30.5, maxNorm=69.5, step = 1)
#> norm raw percentile
#> 1 30.5 4.03 2.6
#> 2 31.5 4.45 3.2
#> 3 32.5 4.88 4.0
#> 4 33.5 5.32 4.9
#> 5 34.5 5.77 6.1
#> 6 35.5 6.23 7.4
#> 7 36.5 6.70 8.9
#> 8 37.5 7.18 10.6
#> 9 38.5 7.68 12.5
#> 10 39.5 8.18 14.7
#> 11 40.5 8.68 17.1
#> 12 41.5 9.20 19.8
#> 13 42.5 9.73 22.7
#> 14 43.5 10.26 25.8
#> 15 44.5 10.80 29.1
#> 16 45.5 11.34 32.6
#> 17 46.5 11.89 36.3
#> 18 47.5 12.45 40.1
#> 19 48.5 13.01 44.0
#> 20 49.5 13.57 48.0
#> 21 50.5 14.14 52.0
#> 22 51.5 14.71 56.0
#> 23 52.5 15.29 59.9
#> 24 53.5 15.87 63.7
#> 25 54.5 16.44 67.4
#> 26 55.5 17.02 70.9
#> 27 56.5 17.60 74.2
#> 28 57.5 18.18 77.3
#> 29 58.5 18.75 80.2
#> 30 59.5 19.32 82.9
#> 31 60.5 19.89 85.3
#> 32 61.5 20.46 87.5
#> 33 62.5 21.02 89.4
#> 34 63.5 21.57 91.1
#> 35 64.5 22.12 92.6
#> 36 65.5 22.66 93.9
#> 37 66.5 23.19 95.1
#> 38 67.5 23.71 96.0
#> 39 68.5 24.22 96.8
#> 40 69.5 24.72 97.4
This function is particularly useful when scales have a large range of raw scores and, consequently, multiple raw scores correspond to a single (rounded) norm score. For example, for a tabulated norm score of T = 40, all integer raw scores that are assigned to norm scores between 39.5 and 40.5 would need to be listed. In the present case, the raw score 8.47 corresponds to T = 39.5, and the raw score 8.99 corresponds to T = 40.5. As a result, no single (integer) raw score would be assigned to a standard score of 40 in the test manual's table. Furthermore, the 'normTable()' function is also useful when norm scores for various subtests need to be tabulated in a single table.
The function ‘rawTable’ is similar to ‘normTable’, but reverses the assignment: The normal scores are assigned to a pre-specified series of raw scores at a certain age. This requires an inversion of the regression function, which is determined numerically. Specify reliability and confidence coefficient to automatically calculate confidence intervals:
rawTable(3.5, model, minRaw = 0, maxRaw = 28, minNorm = 25, maxNorm = 75, step = 1, CI = .95, reliability = .89)
#> raw norm percentile lowerCI upperCI lowerCI_PR upperCI_PR
#> 1 0 - 3 25.00 0.6 21.62 33.88 0.2 5.4
#> 5 4 25.98 0.8 22.49 34.76 0.3 6.4
#> 6 5 27.95 1.4 24.25 36.51 0.5 8.9
#> 7 6 29.89 2.2 25.97 38.24 0.8 12.0
#> 8 7 31.81 3.4 27.68 39.94 1.3 15.7
#> 9 8 33.69 5.1 29.36 41.62 1.9 20.1
#> 10 9 35.56 7.4 31.02 43.28 2.9 25.1
#> 11 10 37.41 10.4 32.66 44.93 4.1 30.6
#> 12 11 39.24 14.1 34.29 46.56 5.8 36.5
#> 13 12 41.06 18.6 35.91 48.18 7.9 42.8
#> 14 13 42.87 23.8 37.52 49.79 10.6 49.2
#> 15 14 44.67 29.7 39.12 51.39 13.8 55.5
#> 16 15 46.47 36.2 40.72 52.99 17.7 61.7
#> 17 16 48.26 43.1 42.32 54.58 22.1 67.7
#> 18 17 50.06 50.2 43.92 56.18 27.2 73.2
#> 19 18 51.86 57.4 45.53 57.79 32.7 78.2
#> 20 19 53.68 64.4 47.14 59.41 38.8 82.7
#> 21 20 55.52 70.9 48.78 61.04 45.1 86.5
#> 22 21 57.38 77.0 50.44 62.70 51.8 89.8
#> 23 22 59.29 82.4 52.14 64.40 58.5 92.5
#> 24 23 61.25 87.0 53.88 66.14 65.1 94.7
#> 25 24 63.28 90.8 55.69 67.95 71.5 96.4
#> 26 25 65.41 93.8 57.59 69.85 77.6 97.6
#> 27 26 67.70 96.2 59.62 71.88 83.2 98.6
#> 28 27 70.22 97.8 61.86 74.13 88.2 99.2
#> 29 28 73.17 99.0 64.49 76.75 92.6 99.6
# generate several raw tables
table <- rawTable(c(2.5, 3.5, 4.5), model, minRaw = 0, maxRaw = 28)
#> The raw table generation yielded indications of inconsistent raw score results: 27,28,. Please check the model consistency.
You need these kinds of tables if you want to determine the exact percentile or the exact normal score for all occurring raw scores.
In the following figures, the manifest and projected raw scores are compared separately for each (age) group. If the 'group' variable is set to 'FALSE', the values are plotted over the entire range of the explanatory variable (i.e. without group differentiation).
The fit is particularly good if all dots are as close as possible to the angle bisector. However, it must be noted that deviations in the extreme upper, but especially in the extreme lower range of the raw scores often occur because the manifest data in these ranges are associated with large measurement error.
The function corresponds to the plot(“raw”) function, except that in this case, the manifest and projected norm scores are plotted against each other. Please specify the minimum and maximum norm score. In the specific example, T-scores from 25 to 75 are used, covering the range from -2.5 to +2.5 standard deviations around the average score of the reference population.
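For example, assuming 'minNorm' and 'maxNorm' are passed through to the underlying norm score plot:
# Compare manifest and fitted norm scores in the T-score range 25 to 75
plot(model, "norm", minNorm = 25, maxNorm = 75)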
The plot("density") function plots the probability density function of the raw scores. This method can be used to visualize the deviation of the test results from the normal distribution.
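For example:
# Probability density of the raw scores compared to the normal distribution
plot(model, "density")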
Finally, we would like to remind mathematically experienced users that it is also possible to perform conventional curve sketching of the regression function. Since the regression equation is a polynomial of the nth degree, the required calculations are not very complicated. You can, for example, determine local extrema, inflection points, etc.
In this context, the ‘plot(“derivative”)’ function provides a visual illustration of the first partial derivative of the regression function with respect to the person location. The illustration helps to determine the points at which the derivative crosses zero. At these points, the mapping is no longer bijective. The points therefore indicate the boundaries of the model’s validity.
plot(model, "derivative")
#> The original data for the regression model spanned from age 2 to 5, with a norm score range from 21.93 to 78.07. The raw scores range from 0 to 28. Coefficients from the 1 order derivative function:
#>
#> L1 A1 A2 A3 L1A1
#> -1.179387e-02 -8.939381e-01 2.822970e-01 -2.256350e-03 4.539723e-02
#> L1A2 L2A1 L3A3 L4A3
#> -1.031571e-02 -1.797172e-04 2.592121e-07 -1.978644e-09