Do Video Games Corrupt Children? – A Scientific Treatise
This is a re-print of an article originally posted on another site, included here so that the blog is a complete repository of my written work. The article is reproduced without pagination, formatting, images or editorial changes made on the original site prior to original publication.
First, this article does not contain any humour and is in fact quite dry. Secondly, like it or not, nobody can accurately answer this question yet, but here I am going to present a scientific basis for getting closer to a definitive result.
My aim with this article is to point out the accuracies and flaws of current thinking and explain why these thoughts are accurate or flawed. To do this, one needs some qualification in the subject. I have experience with creating scientific tests and survey material, and I am qualified in statistics; although I do not have a human sciences degree, I am somewhat self-taught in psychology, including behavioural psychology – however, I am in no sense an expert on that subject. I will try to keep this scientific and objective, with personal opinion removed, and let you decide for yourselves.
In order to say whether video games corrupt children, we first have to define the meaning of corruption in terms that can be scientifically measured. You can produce no meaningful results without an accurate definition of what you are testing.
Is corruption the act of children becoming more violent and aggressive? If so, what defines aggression? How do you quantify aggression as a number that can be measured? How much change in aggression is needed for it to be deemed meaningful? Does short-term aggression count or does it have to be sustained over a period of time, months or years?
Psychologists have scales for this sort of thing, such as the Caprara Irritability Scale and the Buss-Perry Aggression Questionnaire – specific behaviours map to specific numbers – but to the best of my knowledge, at no time has the word corruption been narrowly defined in a video game test.
The sample is the group of all participants (children) in the test. The way the sample is selected is extremely important to ensure a fair and balanced result. There are two classic methods:
- Select people based on extremely narrow criteria to test a very specific group, minimising the number of factors involved that might blur the issue. For example: pick all 8-year-old children, white males with middle-class parents with a specific range of income, all living in the same district, two-parent families only, exclude children with pre-existing physical or mental conditions and so on. Keep each selection as close to identical as possible so that they are all predisposed in approximately the same way, to see if exposing them to violent or sexual video games affects the behaviour of individuals in this specific demographic group.
- Diversify as much as possible. Ignore all factors and take children from all different backgrounds and social groups, keeping only very basic factors such as age range similar. In this case, it is extremely important to try to keep the mix as even as possible. If 80% of the sample happens to belong to one-parent families and this happens to be a factor in altering their behaviour when exposed to video games, you are not going to get a balanced result, and you will have no way of knowing whether it was their background that made them aggressive or the games.
Some existing studies on video game corruption have managed to select samples properly, others haven’t.
Why does it matter? Well, when a scientific test is done, we run a statistical analysis on the results to see if there is any correlation between what the subjects were exposed to and their subsequent behaviour. Suppose in a very clear-cut example that 5 boys and 5 girls are selected. 4 of the girls had a history of depression and so did one of the boys. After being exposed to violent video games, all the girls went out and massacred people, but none of the boys did. Were the girls aggressive because they were girls? Or was it because they had a history of depression? Or was it because of some other factor we didn’t think of? There is no way to know; the best solution therefore is to minimise any differences in the sample group.
To know whether video games affect children’s behaviour, you need a baseline set of behaviours to compare to. The way to do this is to take some of the children from the sample (selected the same way as above) and exclude them from the experiment while keeping all other factors as equal as possible. This means you will expose them to the same environment during the experiment, and the same questioning or other behavioral tests afterwards; you will simply not give them the video games to play. In this way the control group and test group have been treated identically in all respects.
Why does a control group matter? Picture this extreme example. 10 kids play Modern Warfare 2 for 10 minutes then they all go out and shoot someone. So video games corrupted them right? What if you put 10 kids in a silent room for 10 minutes and they all go out and shoot someone too? That changes the picture completely – now it looks like video games didn’t have any impact on the children’s behaviour at all. A control group is therefore essential to be able to tell what is really going on, and without one, any test is worthless.
This in my opinion is the single largest stumbling block of all the tests of video games corrupting children performed so far.
Some people seem to think that you can take a dozen kids and draw a meaningful conclusion from their behaviour. This is not true – in any subject matter – and although it seems fairly intuitive common sense anyway, I’m going to explain the scientific basis for why you need a much, much larger number of children to participate.
Let us take the example of a normal coin, and a weighted coin that will usually land on heads. We don’t know which coin is which, and we’d like to do an experiment to determine scientifically – which means beyond a reasonable doubt – which of the coins is biased.
You toss a normal (fair) coin ten times. On average, you expect it to land heads 5 times and tails 5 times. You then toss the weighted coin ten times. It lands heads 10 times. However, in this case, it so happens by random chance that the fair coin also lands heads 10 times. Which coin is which? You still don’t know. Why? Because you haven’t tossed them enough times to be sure.
We all know intuitively that it is perfectly possible to toss a fair coin 10 times and for it to land the same way up every time. It’s not very likely, but it does happen. This is because even though there is always a roughly 50/50 chance of it landing either way up, the heads and tails do not occur evenly. Over the course of many tosses, it will even out and we will see an approximately equal number of heads and tails. If we toss it many millions of times we may even discover that the result is 50.1% vs 49.9% because of imperfections in the coin or the way it has been tossed (and I will address both those issues later as they are important too). But the only way we can observe this is by repeating the coin toss over and over to minimise any influence by random streaks.
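The effect of repetition is easy to see in a quick simulation (a sketch only; the batch sizes and random seed are arbitrary):

```python
import random

random.seed(42)

# Flip a fair coin in ever-larger batches and watch the proportion
# of heads converge towards 0.5 as random streaks average out.
for n in (10, 100, 10_000, 1_000_000):
    flips = [random.random() < 0.5 for _ in range(n)]
    heads = sum(flips)
    print(f"n = {n:>9}: proportion of heads = {heads / n:.4f}")

# The chance of a fair coin landing heads 10 times in a row is
# 1 / 2**10 -- unlikely (about 0.1%), but far from impossible.
p_streak = 1 / 2 ** 10
print(f"P(10 heads in 10 flips) = {p_streak:.4%}")
```

With 10 flips the proportion can easily land anywhere; by a million flips it sits within a fraction of a percent of 0.5.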
Analogizing this to children, 1 million coin tosses can be considered the same as tossing 1 million identical coins once each. One coin represents one child. Heads represents no change in behaviour after playing video games, and tails represents a significant, scientifically measured change. The difference between these two experiments is that it is unlikely the result will be 50-50, and we don’t yet know what the result really is, but otherwise the concept is essentially the same.
Why don’t we know what the result is? Simple: no test has ever been conducted on a large enough sample of children.
Selecting the sample size
This raises a dilemma. How many children do we actually need as a bare minimum to ensure a meaningful result? Amazingly, this can actually be calculated scientifically to an extraordinary degree of accuracy, however in the case of children being corrupted by video games, it is also extraordinarily difficult to acquire the information we need to make this calculation in the first place, leading to a catch-22 situation.
If we know the ‘average’ (or ‘mean’) aggression level of the entire population, and the ‘standard deviation’ of aggression in the population – which is a fancy way of saying roughly how much people’s aggression levels typically differ from the average, excluding extreme cases – we can calculate the minimum number of children we would need to test to get a statistically significant result.
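This calculation can be sketched with the standard power-analysis formula for detecting a shift in a mean. The numbers below are entirely hypothetical; in reality the population standard deviation and the smallest shift worth detecting are exactly the figures we struggle to obtain:

```python
from math import ceil
from statistics import NormalDist

def min_sample_size(sigma, delta, alpha=0.05, power=0.8):
    """Smallest n needed to detect a shift of `delta` in the mean of a
    population with standard deviation `sigma`, at significance level
    `alpha` with the given statistical power (one-sample z-test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    return ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

# Hypothetical: aggression scores with sigma = 15, and we want to
# detect a 3-point shift in the mean after playing violent games.
print(min_sample_size(sigma=15, delta=3))  # → 197
```

Note how quickly the requirement grows: halving the shift we want to detect quadruples the number of children needed.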
Unfortunately, it is not possible to know these figures precisely. One way to approximate them is to take a large, generalised cross-section of the population (selection method 2 above), give them the same aggression test we would give the children after exposing them to video games, assign each person an aggression level, and extrapolate the figures to cover the entire population of the country or world as desired (if the sample is all taken from one country, you can only extrapolate it to cover that country; to cover the world you have to test subjects from a large group of randomly selected countries).
If you are confused, think about exit polls during an election. Simplifying, we ask people at random as they leave the voting booth who they voted for. If we have asked 10,000 people and 40% of them voted for the conservative party, and the population of the electorate is 50 million, then we can estimate that 50,000,000 * 40% = 20 million people will vote conservative. As the number of people we survey at the exit poll goes up, the estimate becomes more accurate.
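The exit-poll extrapolation above can be written out directly, together with the margin of error that comes with it (a sketch assuming a simple random sample, which real exit polls are not):

```python
from math import sqrt

def exit_poll_estimate(sample_size, share, electorate):
    """Extrapolate an exit-poll share to the whole electorate and
    report a 95% margin of error for that share."""
    projected_votes = electorate * share
    # Standard error of a proportion; 1.96 gives ~95% confidence.
    margin = 1.96 * sqrt(share * (1 - share) / sample_size)
    return projected_votes, margin

votes, margin = exit_poll_estimate(sample_size=10_000,
                                   share=0.40,
                                   electorate=50_000_000)
print(f"Projected votes: {votes:,.0f}, margin of error: +/-{margin:.2%}")
```

With 10,000 respondents the margin on the 40% share is already under one percentage point; with only a few dozen respondents it would be in double digits.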
How does this translate to kids? Well, this is analogous to seeing if a particular group of children would vote differently to the general population. You can think of ‘particular group’ as ‘those exposed to video games’ and ‘vote differently’ to ‘become more aggressive’. Once we’ve gathered some data from the general population at the exit poll, we can calculate how many children we’ll need to expose to video games in order to test if they have a significantly different voting response to the population.
(Statisticians: I’m aware this comparison has flaws regarding discrete measurements and uncorrelated sample sets; I’m trying to keep it simple, the principle holds)
In a nutshell, if we don’t test enough children, we can’t draw a statistically significant result from the experiment, and right now, we don’t know what the minimum number of children needed actually is.
It is two weeks before a general election. The TV polls say “Labour 48%, Conservative 51%”. In small print at the bottom it reads “Margin of error: +/-5%”. That small piece of text makes the poll result completely meaningless. Labour could have as much as 53% of the vote, and the Conservatives could have as little as 46% – and vice versa if the poll showed a 51/48 Lab/Con split.
The error rate of an experiment whose results are based on statistics shrinks as the sample size grows – roughly speaking, it is inversely proportional to the square root of the sample size. In English, that means the more children you test, the more accurate a result you get, with less chance of misleading conclusions. Moreover, it also means that if the results of “corruption vs non-corruption” are fairly close, we cannot know for sure what the true result of the test is unless the percentage difference is more than twice the error rate. In a test with a dozen children, the error rate is extremely high because the sample size is not big enough to reasonably emulate the behaviour of the entire child population.
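The square-root relationship is worth seeing in numbers. This sketch uses the worst-case margin of error for a proportion (assuming a 50/50 split, which maximises the error):

```python
from math import sqrt

# Worst-case 95% margin of error for a measured proportion (p = 0.5).
# Because the error shrinks with the square root of the sample size,
# quadrupling the sample only halves the error.
for n in (12, 100, 1_000, 10_000):
    margin = 1.96 * sqrt(0.5 * 0.5 / n)
    print(f"n = {n:>6}: margin of error +/-{margin:.1%}")
```

For a dozen children the margin is around ±28% – any "result" from such a study is swamped by its own error.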
Corruption is not a black-and-white issue. For simplicity, let’s say corruption is the same as aggression for the next example. Pre-exposure aggression will be measured on a scale, and if you draw a graph with aggression on the horizontal axis and number of people on the vertical axis, you should see a bell curve, indicating that the majority of people have some baseline middling amount of aggression, tailing off in each direction, with a small number of outliers having very low or high general aggression levels in their personalities.
Why am I mentioning this? Well, many people seem to have strange ideas on how to define whether a test of video games on kids gives a meaningful result. We have seen recent “research” that counts how many children out of a small sample pick up a pencil that a psychiatrist deliberately drops after they have played a violent game, and simply adds up the results. If there is no bell curve, so to speak, the research is frankly not worth the paper it’s written on.
The reason is simple: the bell curve provides an excellent, clean and simple way to test the results which cannot be disputed: when you draw the post-exposure aggression (bell) curve over the pre-exposure curve, has it moved to the right by more than twice the error rate? If so, you have a statistically significant result. If not, you don’t. This is important because it removes any subjectivity from interpreting the results.
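The "has the curve moved right?" test can be sketched as a simple two-sample z-test on the means. Everything here is hypothetical data – two bell curves with spread 10, one centred on 50 (pre-exposure) and one on 55 (post-exposure):

```python
import random
from math import sqrt
from statistics import NormalDist, mean, stdev

def shift_significant(pre, post, alpha=0.05):
    """Has the post-exposure distribution shifted right by more than
    chance allows? A simple two-sample z-test on the means."""
    se = sqrt(stdev(pre) ** 2 / len(pre) + stdev(post) ** 2 / len(post))
    z = (mean(post) - mean(pre)) / se
    p_value = 1 - NormalDist().cdf(z)  # one-sided: a rightward shift
    return z, p_value, p_value < alpha

random.seed(1)
# Hypothetical bell-curve aggression scores, 500 children per group.
pre = [random.gauss(50, 10) for _ in range(500)]
post = [random.gauss(55, 10) for _ in range(500)]
z, p_value, significant = shift_significant(pre, post)
print(f"z = {z:.2f}, p = {p_value:.4f}, significant: {significant}")
```

Run the same test with both groups centred on 50 and the shift disappears into the noise – which is exactly the objectivity the bell-curve comparison buys us.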
Newton is sitting under a tree and an apple falls on his head. He writes down the equation for gravity. 100 scientists come along with apples and let go of them at arm’s length, but they all float up to the sky.
This silly example highlights the dangers of doing a single experiment and taking the results as proof of correctness. And in fact, the example is not as far-fetched as it seems, because in the early 20th century Einstein showed that Newton’s description of gravity was incomplete: what we perceive as gravity is really the effect of the natural curvature of space-time. The point of this is, if you are going to test a theory – like whether children are corrupted by video games – you need to do it multiple times, using different methods, with different people conducting the test, and reach a consensus. This is analogous to the problem of the imperfect fair coin and the human error present in the way one person might flip it in the coin example earlier.
If an observer is asked to sit in a room full of children and write down on a report sheet – in her own terms – how aggressive she thinks each child is after a fixed period of observation, it is possible that she will report a significantly different set of numbers to another observer asked to rate the same children at the same time. The observer’s very perception of what aggression is distorts the measurement of accurate data, and makes the results unreliable. All the other factors I’ve covered like the children’s ethnographic and psychiatric backgrounds, the way the samples are selected and so on also limit the applicability of any single study. To mitigate this and reduce the impact of human judgment or error on the results, it is important to repeat the test in different ways and with different people conducting it. So far, a consensus has not been reached.
- American Psychological Association – “Video Games and Aggressive Thoughts, Feelings, and Behavior in the Laboratory and in Life” – http://www.apa.org/pubs/journals/features/psp-784772.pdf
Anderson and Dill describe how the Columbine killers made their video footage of the school massacre emulate a modded version of Doom found on the web site of Eric Harris, one of the offenders. No mention is made of Harris’s other exposures, family environment, or whether he and the other offender, Dylan Klebold, had a history of psychiatric disorders or other affecting variables. The sample size is 2. With no other information available, this cannot be taken as anything more than anecdote – it is not evidence that video games corrupt teenagers.
As it turns out, child psychologist Peter Langman studied Harris, Klebold and 8 other students who committed school shootings. Harris was diagnosed with psychopathy, and Klebold was diagnosed with paranoid schizophrenia. Langman drew these conclusions from some 27,000 pages of documents on the teenagers. For those of you uninitiated in psychiatry, one of the key diagnostic factors differentiating paranoid schizophrenia from other disorders is the presence of voices in the patient’s mind telling them what to do. In Norway where I live, such patients are normally admitted to a mental hospital and receive long-term care to help them recover from their illness.
Langman’s book on the subject can be found here:
However, Anderson and Dill did in fact go on to conduct a reasonably plausible study. The sample size is rather small, the study has not been replicated so it cannot be given definitive credibility, and unfortunately the control group was only 9% of the sample size. These are major flaws, but I think this study represents one of the best and most thorough scientific surveys carried out to date, and it could very well serve as a basis for other scientists to repeat on a much greater scale.
Anderson and Dill define a model of aggression they call GAAM (the General Affective Aggression Model). Their theoretical model takes environmental and psychological factors into account, and was tested in a number of studies, of which I’ll just mention the first here. This study used 227 psychology undergraduate gamers, who were asked to fill out questionnaires detailing how violent the games they play are on a scale of 1-7, the genre of each game, and the average time spent on each one. They found various results which I don’t want to misrepresent, so I recommend you read the article; in brief summary, though, they found that those with aggressive personalities, as defined by the scales, were more likely to be influenced by violent video games. The most interesting finding, however, was that time spent playing video games had a far greater negative impact on delinquency and high school grades, whereas the correlation between exposure to violent games and delinquency and grades was statistically insignificant. Of course, this is a pretty obvious conclusion – kids shouldn’t be skipping school to play games. Is a sample size of 227 enough to be conclusive? Probably not, unfortunately.
There is also another key problem with this interview technique: the participants themselves are left to decide how violent the games they play are, and to report accurately how long they spend playing them. This is a trade-off: we cannot keep people in isolated bubbles for years, but we cannot entirely trust human memory and judgment either – which pushes the error rate up considerably, and is another example of why multiple testing methods (and repeatability) are crucial.
In their analysis, Anderson and Dill state:
“We do not, however, expect that playing violent video games will routinely increase feelings of anger, compared with playing a nonviolent game. To be sure, playing a frustrating game is likely to increase anger. Violent content by itself, however, in the absence of another provocation, is likely to have little direct impact on affect.”
The key words there are “in the absence of another provocation”.
- The National Institute on Media and the Family presents a significant body of studies – http://www.mediafamily.org/videogame2006summit/publications.shtml
Craig A. Anderson, Leonard Berkowitz, Edward Donnerstein, L. Rowell Huesmann, James D. Johnson, Daniel Linz, Neil M. Malamuth and Ellen Wartella summarise a wide clutch of studies in this article: http://www.psychology.iastate.edu/faculty/caa/abstracts/2000-2004/03ABDHJLMW.pdf
Their summary – which is based on a wrapping up of the conclusions of many other studies rather than their own direct work – states, among other things:
“Research on violent television and films, video games, and music reveals unequivocal evidence that media violence increases the likelihood of aggressive and violent behavior in both immediate and long-term contexts. The effects appear larger for milder than for more severe forms of aggression, but the effects on severe forms of violence are also substantial (r = .13 to .32) when compared with effects of other violence risk factors or medical effects deemed important by the medical community (e.g., effect of aspirin on heart attacks). The research base is large; diverse in methods, samples, and media genres; and consistent in overall findings. The evidence is clearest within the most extensively researched domain, television and film violence. The growing body of video-game research yields essentially the same conclusions.”
One can of course twist that paragraph to mean many things: “video games are no worse than other forms of media”; “it was already known that TV and films influence kids, so why are we surprised that video games do too?”. They also note that larger sample sizes are still needed to provide conclusive proof regarding video games. Their advice, however, is telling:
“Regardless of the attempts made to limit the amount of violence reaching American families, those families themselves are clearly critical in guiding what reaches their children. Whether by adopting V-chip technology for home TV programming, subscribing to voluntary violence screening by Internet providers, or simply monitoring closely children’s use of TVs, computers, and video games, parents can reduce and shape their children’s consumption of violent media. Communities – including schools, religious organizations, and parent-teacher organizations – can teach parents and children how to be better, healthier consumers of the media.”
In other words: be a better parent. Unfortunately, it is a real shame that after their extremely detailed 105-page dossier, their conclusion shows naivety both in believing that an ISP can screen out violent content, and in believing that parents can realistically police their children. It only takes one “bad” parent to undermine the system; if you are a parent, you know exactly what I’m talking about: you forbid your child to play Modern Warfare 2, so he goes to his mate’s house and plays it there, then comes back and whines incessantly until you buy him a copy just to get some peace. And even if you don’t, he’ll still be playing it somewhere else anyway, and you’re not in the mood to start a row with your kid’s friends’ parents.
As you can see, despite the fact the two papers highlighted both involve a fair number of reasonably sound studies, they still manage to draw somewhat contradictory conclusions (although they are in partial agreement), which just demonstrates further why repeatability is so important.
Further research is needed. There is no disagreement among psychologists that nature (genetics) and nurture (environmental and ethnographic factors) both play complex and inter-related roles in the kind of people that are produced as an end result. Some may become violent offenders; others may be mild and benign. Some of them may be more or less influenced by video games than others. Just how much influence games and other media exert, against the impossible-to-measure backdrop of a world without any media at all, is still unknown.
I apologize to any statisticians in the audience for the gross simplification of the mathematical aspects of this article.