When Should You Take An Intelligence Test?

Intelligence tests were first introduced in the late 1800s as a way to evaluate new immigrants to America. Over the years, their use has expanded, from assigning jobs in the Armed Forces to determining who should be given access to higher education. Along the way, the use and accuracy of intelligence tests have stirred considerable controversy.

A Brief History of Intelligence Testing

In the late 1800s, a growing movement called Social Darwinism espoused the belief that some human beings were innately stronger, smarter, and even more attractive than others. Just as Darwin had explained that only the fittest animals were able to survive and reproduce, some believed that the human race should take steps to ensure that only its most worthy members were allowed to produce the next generation.

In order to achieve this goal, proponents of this belief set out to devise a test that would allow them to easily judge an individual (ideally from a young age) and determine his or her intelligence. The purpose of this testing varied among different groups. Some believed that people who tested in a low range could be educated to achieve the same things as their peers who tested better. Others, however, believed that such a test would provide a simple way of determining everything from job placement to possible marriage partners.

Discrimination in Testing

One of the first uses of this type of testing was taken up by American and European immigration agencies. Facing public criticism from citizens that immigrants were a drain on society, these agencies set out to develop an easily administered test that could be used to ensure that only immigrants of the highest quality would be let into the country.

These early tests were crude, based on virtually no science, and designed more to appease politicians than to actually measure anyone’s intelligence. Immigrants from countries deemed less desirable were often instructed to read pages of material in a language they barely understood. On the other hand, immigrants from countries deemed more desirable (or those with enough money to pay to get out of taking the tests) were barely tested at all.

This lack of accountability did very little to assuage the public’s demand for such tests. By the early 1900s, many doctors, scientists, and other educated people began to produce their own tests. In France, two researchers, Alfred Binet and Théodore Simon, developed a test to be administered to French schoolchildren. The test consisted of a number of questions testing memory, observation, and problem solving. The Binet-Simon Intelligence Scale soon became the preferred intelligence test in Europe.

By 1916, the test came to America, where Lewis Terman, a professor at Stanford University, adapted it for American children. This test was known as the Stanford-Binet Intelligence Scale, and it soon became the standard for all intelligence testing in America.

War and Intelligence

The outbreak of World War I brought a renewed interest in IQ testing. The need to raise large armies very quickly meant that the military needed a test that could determine which men were best suited for each job. The Army Alpha and Beta tests were devised to meet this goal. Every recruit was given one of these tests and then assigned a job based on his score.

This test was considered progressive for its time because it took into account the fact that some men who had never been taught to read could still be trained for higher-level positions. It’s important to note, however, that no provisions were made for the different cultural, economic, or racial backgrounds of the test takers. In America, this often meant that minorities were assigned the most menial jobs.

By the 1920s and 1930s, these tests were used to make sweeping generalizations about various ethnic groups. This, combined with a growing eugenics movement, was one of the factors cited as a justification for the extermination of various ethnic minorities throughout Europe by the Nazis.

For this reason, many IQ tests fell out of favor at the end of World War II, as scientists who saw the horrors of the war began to realize the unintended consequences of the tests. Nonetheless, by the late 1950s, education reformers began calling for new tests that would measure a child’s readiness for school.

Over the next several decades, several different tests were proposed, but this time each one was scrutinized, and in many instances criticized, for being more a measure of a child’s exposure to education, vocabulary, and other concepts than a true test of innate intelligence. The inability to produce a test that could overcome this hurdle meant that by the 1980s, most school districts had stopped administering IQ tests to all children. Today, the test is mostly given in specific instances to determine a child’s need for specialized educational services.

Today’s IQ Tests

Intelligence tests today tend to focus on the ways a person learns best rather than trying to assign a number that expresses to others how smart that person is. The most common example of this is known as the Multiple Intelligence Test. This test was originally designed to determine the preferred method of learning in young adults, but it was quickly adapted for use with kids.

The multiple intelligence test for kids is a tool that is meant to help teachers determine the best way for a child to learn. It is often administered at the beginning of a child’s schooling. The results can be used to group children into classrooms that are designed to cater to their specific learning style.

The multiple intelligence test recognizes that there are different types of intelligence, including Logic/Math, Nature, Social, Musical, and Spatial. It assigns scores in all of these areas, and the areas with the highest scores are taken to indicate a person’s preferred way of learning.

There are also a number of tests that judge personality. These so-called “fluid” intelligence tests are meant to identify an individual’s strengths rather than assign him or her a single score.

It’s important to realize that scores on these tests can change as a person grows and is exposed to different ideas. For example, a child who scores low in Logic and Reasoning can improve his or her score by working on math and logic problems. It is believed, however, that if a person shows no great interest in these subjects, it will be very hard for him or her to improve in this area.

How to Get Tested

Since IQ tests are no longer routinely administered in most public schools, it is common for a person not to know his or her IQ. There are a number of websites that offer so-called intelligence tests, but their accuracy is generally low. Most of them are simply collections of questions from other tests that ask about pattern recognition and/or vocabulary.

It’s important to understand why an online intelligence test is not accurate. These tests tend to measure a person’s familiarity with the types of questions and vocabulary used rather than his or her intelligence. Even pattern recognition tests have been shown to be inaccurate among populations where the majority of people have had access to mathematical education. In other words, the simple fact that a person went to school can help him or her achieve a higher score than a person who did not have that educational opportunity. This does not mean that the first person is smarter, only that he or she is better at this type of testing.

Despite decades of research, most IQ tests today do a mediocre to poor job of actually evaluating a person’s innate intelligence. While they can be a good indicator of a learning problem, the tests remain deeply biased and flawed.
