{"id":4029,"date":"2026-04-03T08:25:08","date_gmt":"2026-04-03T12:25:08","guid":{"rendered":"http:\/\/wordpress-473092-1485251.cloudwaysapps.com\/?p=4029"},"modified":"2026-04-03T08:25:08","modified_gmt":"2026-04-03T12:25:08","slug":"hierarchical-moderated-multiple-regression-analysis-in-r","status":"publish","type":"post","link":"https:\/\/www.data-mania.com\/blog\/hierarchical-moderated-multiple-regression-analysis-in-r\/","title":{"rendered":"Hierarchical Moderated Multiple Regression in R (Step-by-Step Demo)"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-post\" data-elementor-id=\"4029\" class=\"elementor elementor-4029\" data-elementor-post-type=\"post\">\n\t\t\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-1ca68f7f elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"1ca68f7f\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-4d74799b\" data-id=\"4d74799b\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-42ccd869 elementor-widget elementor-widget-text-editor\" data-id=\"42ccd869\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>In this tutorial, you\u2019ll learn how to perform <strong data-start=\"839\" data-end=\"885\">Hierarchical Moderated Multiple Regression<\/strong> in R, a technique that helps uncover how one variable changes the relationship between others in a dataset.<\/p><p><span style=\"font-weight: 400;\">Moderator models are often used to examine when an independent variable 
influences a dependent variable. More specifically, <strong>moderators are used to identify factors that change the relationship between independent (X) and dependent (Y) variables<\/strong>. In this article, I explain how moderation in regression works, and then demonstrate how to do a <strong>Hierarchical Moderated Multiple Regression<\/strong> in R.<\/span><\/p><p><img fetchpriority=\"high\" decoding=\"async\" src=\"http:\/\/data-mania.com\/blog\/wp-content\/uploads\/2018\/03\/A-Demo-of-Hierarchical-Moderated-Multiple-Regression-Analysis-in-R-1024x1024.png\" alt=\"A Demo of Hierarchical, Moderated, Multiple Regression Analysis in R\" width=\"500\" height=\"500\" data-pin-nopin=\"true\" \/><\/p><p>Pro-tip: Check out our new article on <a href=\"https:\/\/www.data-mania.com\/blog\/growth-forecasting-hierarchical-models\/\">how hierarchical modeling informs growth forecasting in marketing here<\/a>.<\/p><h3 style=\"font-size: 1.6em; letter-spacing: 1px;\"><span style=\"font-weight: 400;\">Understanding Hierarchical Moderated Multiple Regression in R<\/span><\/h3><p><span style=\"font-weight: 400;\">Hierarchical, moderated, multiple <a href=\"http:\/\/data-mania.com\/blog\/logistic-regression-example-in-python\/\">regression analysis<\/a> in R can get pretty complicated, so let&#8217;s start at the very beginning. Hierarchical moderated multiple regression extends traditional regression by testing interaction effects across levels, allowing you to see how moderators influence variable relationships. Let us have a look at a generic linear regression model:<\/span><\/p><p style=\"text-align: center;\"><strong><i>Y<\/i>\u2004=\u2004<i>\u03b2<\/i>0\u2005+\u2005<i>\u03b2<\/i>1<i>X<\/i>\u2005+\u2005<i>\u03f5<\/i><\/strong><\/p><p><span style=\"font-weight: 400;\">Y is the dependent variable and X is the independent variable; i.e., the regression model tries to explain the relationship between the two variables, with changes in X accounting for changes in Y. 
The above equation has a single independent variable.<\/span><\/p><h2><span style=\"font-weight: 400;\">So, what is moderation analysis? <\/span><\/h2><p><span style=\"font-weight: 400;\"><img decoding=\"async\" data-src=\"http:\/\/data-mania.com\/blog\/wp-content\/uploads\/2018\/03\/how-to-do-hierarchical-moderated-multiple-regression-analysis-in-R-683x1024.png\" alt=\"how to do hierarchical, moderated, multiple regression analysis in R\" width=\"300\" height=\"450\" data-pin-title=\"A Demo of Hierarchical, Moderated, Multiple Regression Analysis in R\" data-pin-description=\"Moderation in regression? \ud83e\udd14 In this post, I'll explain how moderation in regression works. Do you want to know how to do: \u27a1\ufe0f hierarchical \u27a1\ufe0f moderated \u27a1\ufe0f multiple regression analysis in R? | READ THIS: http:\/\/data-mania.com\/blog\/hierarchical-moderated-multiple-regression-analysis-in-r\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" class=\"lazyload\" style=\"--smush-placeholder-width: 300px; --smush-placeholder-aspect-ratio: 300\/450;\" \/><\/span><\/p><p><span style=\"font-weight: 400;\">Moderator (Z) models are often used to examine when an independent variable influences a dependent variable. That is, moderated models are used to identify factors that change the relationship between independent (X) and dependent (Y) variables. A moderator variable (Z) will enhance a regression model if the relationship between the independent variable (X) and dependent variable (Y) varies as a function of Z.<\/span><\/p><h2><span style=\"font-weight: 400;\">How does a moderator affect a regression model? <\/span><\/h2><p><span style=\"font-weight: 400;\">Let\u2019s look at it from two different perspectives. 
First, looking at it from an experimental research perspective:<\/span><\/p><ul><li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">The manipulation of X causes change in Y.<\/span><\/li><li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">A moderator variable (Z) implies that the effect of X on Y is <\/span><b>NOT<\/b><span style=\"font-weight: 400;\"> consistent across the distribution of Z.<\/span><\/li><\/ul><p><span style=\"font-weight: 400;\">Second, looking at it from a correlational perspective:<\/span><\/p><ul><li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Assume a correlation between variable X and variable Y.<\/span><\/li><li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">A moderator variable (Z) implies that the correlation between X and Y is <\/span><b>NOT<\/b><span style=\"font-weight: 400;\"> consistent across the distribution of Z.<\/span><\/li><\/ul><p>Now before doing a Hierarchical Moderated Multiple Regression in R, you must always be sure to check whether your data satisfies the model assumptions!<\/p><h2>Checking the assumptions<\/h2><p><span style=\"font-weight: 400;\">There are several assumptions that the data has to satisfy before the moderation analysis is done:<\/span><\/p><ul><li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">The dependent variable (Y) should be measured on a continuous scale (i.e., it should be an interval or ratio variable).<\/span><\/li><li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">The data must have one independent variable (X), which is either continuous (i.e., an interval or ratio variable) or categorical (i.e., a nominal or ordinal variable), and one moderator variable (Z).<\/span><\/li><li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">The residuals must not be autocorrelated. 
This can be checked using the Durbin-Watson test in R.<\/span><\/li><li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">It goes without saying that there needs to be a linear relationship between the dependent variable (Y) and the independent variable (X). There are a number of ways to check for linear relationships, like creating a scatterplot.<\/span><\/li><li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">The data needs to show homoscedasticity. This assumption means that the variance around the regression line is roughly the same for all combinations of the independent (X) and moderator (Z) variables.<\/span><\/li><li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">The data must not show multicollinearity within the independent variables (X). This usually occurs when two or more independent variables are highly correlated with each other. This can be inspected visually by plotting a correlation heatmap. <\/span><\/li><li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">The data ideally should not have any significant outliers, highly influential points, or many missing values. Highly influential points can be detected using studentized residuals.<\/span><\/li><li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">The last assumption is that the residual errors are approximately normally distributed. <\/span><\/li><\/ul><h3 style=\"font-size: 1.6em; letter-spacing: 1px;\"><span style=\"font-weight: 400;\">Demonstrating hierarchical, moderated, multiple regression analysis in R<\/span><\/h3><p>Now that we know what moderation is, let us start with a demonstration of how to do hierarchical, moderated, multiple regression analysis in R.<\/p><pre>&gt; ## Reading in the csv file\n&gt; dat &lt;- read.csv(file.choose(), header = TRUE)\n<\/pre><p>Now that the data is loaded into the R environment, I\u2019ll talk about the data a bit. The data is based on the idea of stereotype threat. 
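<\/p><p>As a quick aside, several of the assumptions listed above can be checked in R once a preliminary model has been fit. The sketch below is a hypothetical illustration; it assumes the lmtest and car packages are installed and uses the variable names that appear later in this demo:<\/p><pre>&gt; # Hypothetical assumption checks on a preliminary model\n&gt; fit &lt;- lm(iq ~ wm + d1 + d2, data = dat)\n&gt; lmtest::dwtest(fit)              # residual autocorrelation (Durbin-Watson)\n&gt; car::vif(fit)                    # multicollinearity among predictors\n&gt; shapiro.test(residuals(fit))     # approximate normality of residuals\n&gt; plot(fitted(fit), rstudent(fit)) # spread of studentized residuals\n<\/pre><p>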
A group of students is set up to take an IQ test. When the students arrive to take the test, they are exposed to implicit or explicit stereotype threats, such as \u201cwomen usually perform worse than men in this test\u201d. This, in turn, tends to affect the performance of the women candidates.<\/p><p>Here, the independent variable (X) is the experimental manipulation (threat) and the dependent variable (Y) is the IQ test score. The variable working memory capacity (wm) is the moderator. We will investigate how the threat affects the IQ test scores, with the idea that working memory (wm) may have an effect on this relationship (i.e., to see whether participants with a strong working memory are less impacted by the stereotype threat). In other words, the moderator may reveal that the stereotype threat works on some people and not on others.<\/p><p>The three threat categories are:<\/p><ol><li>Explicit threat<\/li><li>Implicit threat<\/li><li>No threat (control)<\/li><\/ol><p>Each group consists of 50 students. Let\u2019s look at the structure of the data:<\/p><pre>&gt; str(dat)\n'data.frame':\t150 obs. of  7 variables:\n $ subject    : int  1 2 3 4 5 6 7 8 9 10 ...\n $ condition  : Factor w\/ 3 levels \"control\",\"threat1\",..: 1 1 1 1 1 1 1 1 1 \n $ iq         : int  134 121 86 74 80 105 100 121 138 104 ...\n $ wm         : int  91 145 118 105 96 133 99 97 96 105 ...\n $ WM.centered: num  -8.08 45.92 18.92 5.92 -3.08 ...\n $ d1         : int  0 0 0 0 0 0 0 0 0 0 ...\n $ d2         : int  0 0 0 0 0 0 0 0 0 0 ...\n<\/pre><p>Looking at the structure of the data frame, the condition variable is categorical with three levels, as already discussed. Since the condition variable has three categories, we have to create n-1 dummy variables, where n is the number of categories. So d1 and d2 are the dummy-encoded variables. When d1 and d2 are both 0, the condition is control. When d1 is 1, the condition is threat1. 
When d2 is 1, the condition is threat2.<\/p><pre>&gt; head(dat)\nsubject condition iq wm WM.centered d1 d2\n1 1 control 134 91 -8.08 0 0\n2 2 control 121 145 45.92 0 0\n3 3 control 86 118 18.92 0 0\n4 4 control 74 105 5.92 0 0\n5 5 control 80 96 -3.08 0 0\n6 6 control 105 133 33.92 0 0\n<\/pre><p>Now that we know what the data looks like, I\u2019m going to plot a boxplot of IQ by test condition.<\/p><pre>&gt; library(ggplot2)\n&gt; ggplot(dat, aes(condition, iq)) + geom_boxplot()\n<\/pre><p><img decoding=\"async\" data-src=\"http:\/\/data-mania.com\/blog\/wp-content\/uploads\/2018\/03\/image1.png\" alt=\"\" width=\"729\" height=\"524\" data-pin-nopin=\"true\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" class=\"lazyload\" style=\"--smush-placeholder-width: 729px; --smush-placeholder-aspect-ratio: 729\/524;\" \/><\/p><p>Looking at the three groups in the boxplot, it is quite noticeable that the IQ score decreases when there is a threat, and that the severity of the threat (implicit vs. explicit) also affects the IQ scores a little. So it seems the presence, and to a lesser extent the severity, of a threat affects the IQ scores in a negative way, with the presence of a threat decreasing the IQ scores by a large margin.<\/p><p>We can also plot a scatter plot:<\/p><pre>&gt; ggplot(dat, aes(wm, iq, color = condition)) + geom_point()\n<\/pre><p style=\"text-align: center;\"><img decoding=\"async\" data-src=\"http:\/\/data-mania.com\/blog\/wp-content\/uploads\/2018\/03\/image2.png\" alt=\"\" width=\"709\" height=\"524\" data-pin-nopin=\"true\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" class=\"lazyload\" style=\"--smush-placeholder-width: 709px; --smush-placeholder-aspect-ratio: 709\/524;\" \/><\/p><p>Looking at the scatter plot, there is a clear distinction between the control cluster and the two threat clusters. 
As seen from the box plot, the scatter plot also shows that people who took the exam in the control condition scored better on the IQ test than the other two groups.<\/p><p>To quantify this, let\u2019s compute the correlation between IQ and working memory separately for each condition group.<\/p><pre>&gt; # Make the subset for the group condition 'control'\n&gt; library(dplyr)\n&gt; mod_control &lt;- dat %&gt;% filter(condition == 'control')\n<\/pre><pre>&gt; # Make the subset for the group condition 'threat1'\n&gt; mod_threat1 &lt;- dat %&gt;% filter(condition == 'threat1')\n<\/pre><pre>&gt; # Make the subset for the group condition 'threat2'\n&gt; mod_threat2 &lt;- dat %&gt;% filter(condition == 'threat2')\n<\/pre><pre>&gt; # Calculate the correlations\n&gt; cor(mod_control$iq, mod_control$wm, method = 'pearson')\n[1] 0.1079827\n<\/pre><pre>&gt; cor(mod_threat1$iq, mod_threat1$wm, method = 'pearson')\n[1] 0.7231095<\/pre><pre>&gt; cor(mod_threat2$iq, mod_threat2$wm, method = 'pearson')\n[1] 0.6772917\n<\/pre><p>There is a really strong correlation between IQ and WMC in the threat conditions but not in the control condition.<br \/>Now let\u2019s build a model without moderation and a model with moderation. Generally, when both the independent variable (X) and the moderator (Z) are continuous, the model is:<\/p><p style=\"text-align: center;\"><strong> Y\u2004=\u2004\u03b20\u2005+\u2005\u03b21X\u2005+\u2005\u03b22Z\u2005+\u2005\u03b23(X\u2005*\u2005Z)+\u03f5<\/strong><\/p><p>With \u03b23 we are testing for a non-additive effect, so if \u03b23 is significant there is a moderation effect. This model is not valid when the variable X is categorical.<\/p><p>Now consider the case where the independent variable (X) is categorical and the moderator variable (Z) is continuous. 
The model changes a bit.<\/p><p style=\"text-align: center;\"><strong>Y\u2004=\u2004\u03b20\u2005+\u2005\u03b21(D1)+\u03b22(D2)+\u03b23Z\u2005+\u2005\u03b24(D1\u2005*\u2005Z)+\u03b25(D2\u2005*\u2005Z)+\u03f5<\/strong><\/p><p>With this specific data, the independent variable is the stereotype threat with three levels. I have already explained how dummy encoding is done, so D1 and D2 are used to represent the three levels in the model. The products of the dummy codes and WMC are used to look for the moderation effect.<\/p><p>Let\u2019s run the R code for the models.<\/p><pre>&gt; # Model without moderation\n&gt; model_1 &lt;- lm(dat$iq ~ dat$wm + dat$d1 + dat$d2)\n&gt; # Get the summary of model_1\n&gt; summary(model_1)\n\nCall:\nlm(formula = dat$iq ~ dat$wm + dat$d1 + dat$d2)\n\nResiduals:\nMin 1Q Median 3Q Max\n-47.339 -7.294 0.744 7.608 42.424\n\nCoefficients:\nEstimate Std. Error t value Pr(&gt;|t|)\n(Intercept) 59.78635 7.14360 8.369 4.30e-14 ***\ndat$wm 0.37281 0.06688 5.575 1.16e-07 ***\ndat$d1 -45.20552 2.94638 -15.343 &lt; 2e-16 ***\ndat$d2 -46.90735 2.99218 -15.677 &lt; 2e-16 ***\n---\nSignif. codes: 0 \u2018***\u2019 0.001 \u2018**\u2019 0.01 \u2018*\u2019 0.05 \u2018.\u2019 0.1 \u2018 \u2019 1\n\nResidual standard error: 14.72 on 146 degrees of freedom\nMultiple R-squared: 0.7246, Adjusted R-squared: 0.719\nF-statistic: 128.1 on 3 and 146 DF, p-value: &lt; 2.2e-16\n<\/pre><pre>&gt; # Create new predictor variables for testing moderation (product of working memory and the threat condition)\n&gt; wm_d1 &lt;- dat$wm * dat$d1\n&gt; wm_d2 &lt;- dat$wm * dat$d2\n&gt; # Model with moderation\n&gt; model_2 &lt;- lm(dat$iq ~ dat$wm + dat$d1 + dat$d2 + wm_d1 + wm_d2)\n&gt; # Get the summary of model_2\n&gt; summary(model_2)\n\nCall:\nlm(formula = dat$iq ~ dat$wm + dat$d1 + dat$d2 + wm_d1 + wm_d2)\n\nResiduals:\nMin 1Q Median 3Q Max\n-50.414 -7.181 0.420 8.196 40.864\n\nCoefficients:\nEstimate Std. 
Error t value Pr(&gt;|t|)\n(Intercept) 85.5851 11.3576 7.535 4.95e-12 ***\ndat$wm 0.1203 0.1094 1.100 0.27303\ndat$d1 -93.0952 16.8573 -5.523 1.52e-07 ***\ndat$d2 -79.8970 15.4772 -5.162 7.96e-07 ***\nwm_d1 0.4716 0.1638 2.880 0.00459 **\nwm_d2 0.3288 0.1547 2.125 0.03529 *\n---\nSignif. codes: 0 \u2018***\u2019 0.001 \u2018**\u2019 0.01 \u2018*\u2019 0.05 \u2018.\u2019 0.1 \u2018 \u2019 1\n\nResidual standard error: 14.38 on 144 degrees of freedom\nMultiple R-squared: 0.7409, Adjusted R-squared: 0.7319\nF-statistic: 82.35 on 5 and 144 DF, p-value: &lt; 2.2e-16\n<\/pre><p>In the baseline model, all three predictors have significant effects on the IQ scores. In the moderated model, the stereotype-threat dummies and both interaction terms are significant, while the simple effect of working memory (dat$wm, p = 0.273) is no longer significant once the interactions are included. The effect of stereotype threat is strongly negative, and the effect of working memory capacity is slightly positive. Because the interaction terms wm_d1 and wm_d2 are significant, there is indeed a moderation effect in the data. Now that both models are ready, we have to compare them; ANOVA is a good way to compare nested models.<\/p><pre>&gt; # Compare model_1 and model_2 with the help of the ANOVA function\n&gt; anova(model_1, model_2)\nAnalysis of Variance Table\n\nModel 1: dat$iq ~ dat$wm + dat$d1 + dat$d2\nModel 2: dat$iq ~ dat$wm + dat$d1 + dat$d2 + wm_d1 + wm_d2\nRes.Df RSS Df Sum of Sq F Pr(&gt;F)\n1 146 31655\n2 144 29784 2 1871.3 4.5238 0.01243 *\n---\nSignif. codes: 0 \u2018***\u2019 0.001 \u2018**\u2019 0.01 \u2018*\u2019 0.05 \u2018.\u2019 0.1 \u2018 \u2019 1\n<\/pre><p>The p-value indicates that the null hypothesis is rejected. This means that there is a significant difference between the two models, so the effect of the moderator is significant. This tells us that:<\/p><ul><li>People with high WMC were not affected by the stereotype threat.<\/li><li>People with low WMC were affected by the stereotype threat and scored lower on the IQ test.<\/li><\/ul><p>Let\u2019s plot the scatter plots along with the regression lines. 
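<\/p><p>As a side note, the same moderated model can be fit without hand-built dummy and product variables by using R\u2019s formula interface. Because condition is a factor whose reference level is control, the sketch below (model_2b is a hypothetical name) is equivalent to the moderated model above under R\u2019s default treatment coding:<\/p><pre>&gt; # Equivalent moderated model via the formula interface\n&gt; model_2b &lt;- lm(iq ~ wm * condition, data = dat)\n&gt; summary(model_2b)\n&gt; # The same nested-model comparison, without manual dummies\n&gt; anova(lm(iq ~ wm + condition, data = dat), model_2b)\n<\/pre><p>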
The first plot is for the first order or primary effects of WMC on IQ<\/p><pre>&gt; # Illustration of the primary effects of WMC on IQ\n&gt; ggplot(dat, aes(wm, iq)) + geom_smooth(method = 'lm', color = 'brown') +\n+ geom_point(aes(color = condition))\n<\/pre><p style=\"text-align: center;\"><img decoding=\"async\" data-src=\"http:\/\/data-mania.com\/blog\/wp-content\/uploads\/2018\/03\/image3.png\" alt=\"\" width=\"709\" height=\"524\" data-pin-nopin=\"true\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" class=\"lazyload\" style=\"--smush-placeholder-width: 709px; --smush-placeholder-aspect-ratio: 709\/524;\" \/><\/p><p>The second scatter plot illustrates the moderation effect of WMC on IQ:<\/p><pre>&gt; # Illustration of the moderation effect of WMC on IQ\n&gt; ggplot(dat, aes(wm, iq)) + geom_smooth(aes(group = condition), method = 'lm', se = T, color = 'brown') + geom_point(aes(color = condition))\n<\/pre><p><img decoding=\"async\" data-src=\"http:\/\/data-mania.com\/blog\/wp-content\/uploads\/2018\/03\/image4.png\" alt=\"\" width=\"709\" height=\"524\" data-pin-nopin=\"true\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" class=\"lazyload\" style=\"--smush-placeholder-width: 709px; --smush-placeholder-aspect-ratio: 709\/524;\" \/><\/p><p>We can clearly see a change in slopes, so this indicates moderation.<\/p><p>Mastering <strong data-start=\"2057\" data-end=\"2103\">Hierarchical Moderated Multiple Regression<\/strong> in R equips data scientists with deeper insight into how moderating variables shape outcomes across models.<strong>\u00a0In what ways might you consider applying this analytical method in your own work?\u00a0<\/strong><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section 
elementor-top-section elementor-element elementor-element-e50ff37 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"e50ff37\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-100af13\" data-id=\"100af13\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-04fcd1a elementor-widget elementor-widget-spacer\" data-id=\"04fcd1a\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"spacer.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<div class=\"elementor-spacer\">\n\t\t\t<div class=\"elementor-spacer-inner\"><\/div>\n\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-3e0a945 elementor-widget elementor-widget-text-editor\" data-id=\"3e0a945\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<h3>Author Bio:<\/h3><p>This article was contributed by <a href=\"https:\/\/www.perceptive-analytics.com\/data-analytics-companies\/\" target=\"_blank\" rel=\"noopener\">Perceptive Analytics<\/a>. Rohit Mattah, Chaitanya Sagar, Prudhvi Potuganti and Saneesh Veetil contributed to this article. Perceptive Analytics provides data analytics, data visualization, business intelligence and reporting services to e-commerce, retail, healthcare and pharmaceutical industries. 
Our client roster includes Fortune 500 and NYSE listed companies in the USA and India.<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>Learn how to perform hierarchical, moderated, multiple regression analysis in R with step-by-step code, interpretation, and visual examples.<\/p>\n","protected":false},"author":4,"featured_media":9467,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":"","_links_to":"","_links_to_target":""},"categories":[582],"tags":[49,326,327,328,329,146],"class_list":["post-4029","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-startups","tag-data-science","tag-hierarchical","tag-moderated","tag-multiple-regression-analysis-in-r","tag-r-programming","tag-statistics"],"_links":{"self":[{"href":"https:\/\/www.data-mania.com\/blog\/wp-json\/wp\/v2\/posts\/4029","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.data-mania.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.data-mania.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.data-mania.com\/blog\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/www.data-mania.com\/blog\/wp-json\/wp\/v2\/comments?post=4029"}],"version-history":[{"count":9,"href":"https:\/\/www.data-mania.com\/blog\/wp-json\/wp\/v2\/posts\/4029\/revisions"}],"predecessor-version":[{"id":20199,"href":"https:\/\/www.data-mania.com\/blog\/wp-json\/wp\/v2\/posts\/4029\/revisions\/20199"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.data-mania.com\/blog\/wp-json\/wp\/v2\/media\/9467"}],"wp:attachment":[{"href":"https:\/\/www.data-mania.com\/blog\/wp-json\/wp\/v2\/media?parent=4029"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.data-mania.c
om\/blog\/wp-json\/wp\/v2\/categories?post=4029"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.data-mania.com\/blog\/wp-json\/wp\/v2\/tags?post=4029"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}