
On High-Range Test Construction 25: Patrick Liljegren

2024-11-22

Publisher: In-Sight Publishing

Publisher Founding: March 1, 2014

Web Domain: http://www.in-sightpublishing.com

Location: Fort Langley, Township of Langley, British Columbia, Canada

Journal: In-Sight: Independent Interview-Based Journal

Journal Founding: August 2, 2012

Frequency: Three (3) Times Per Year

Review Status: Non-Peer-Reviewed

Access: Electronic/Digital & Open Access

Fees: None (Free)

Volume Numbering: 13

Issue Numbering: 1

Section: E

Theme Type: Idea

Theme Premise: “Outliers and Outsiders”

Theme Part: 32

Formal Sub-Theme: High-Range Test Construction

Individual Publication Date: November 22, 2024

Issue Publication Date: January 1, 2025

Author(s): Scott Douglas Jacobsen

Word Count: 5,350 

Image Credits: Patrick Liljegren.

International Standard Serial Number (ISSN): 2369-6885

Please see the footnotes, bibliography, and citations after the publication.*

Abstract

Patrick Liljegren is a member of the Synaptiq Society and The Glia Society. He achieved a score of 171 on the OASIS IQ test. Patrick is passionate about audio system tweaks, including the use of crystals. He enjoys eating banana ice cream daily and braving the winter without a jacket. His philosophy emphasizes self-improvement, perseverance, and the importance of having fun. Liljegren’s passion for test construction developed from his interest in cognitive processes and from personal experience with traditional IQ tests, which felt limited in measuring deeper logical reasoning. Dissatisfied with repetitive and uninspiring test formats, Liljegren sought to create engaging and enjoyable tests that foster cognitive growth and reflect true intellectual ability. He emphasizes the importance of avoiding biases, maintaining rigorous test design, and ensuring test reliability. His focus is on holistic, multi-domain questions that stimulate deeper problem-solving. For valid results, he values participant engagement and careful test scoring, addressing potential errors and aiming to support test-takers’ growth and confidence.

Keywords: bias in test creation, abstraction importance, heterogeneous tests, test validity, construct validity, confirmatory factor analysis, reducing ambiguity.

On High-Range Test Construction 25: Patrick Liljegren

Scott Douglas Jacobsen: When did this interest in test construction truly come forward for you? 

Patrick Liljegren: This interest in test construction emerged as a natural evolution of my lifelong passion for understanding cognitive processes. After taking numerous IQ tests myself, I realized there was a distinct kind of logic I was using—one that allowed me to engage with problems on a deeper level than these tests seemed to measure. Each test left me with the sense that there was something more to explore, a way of thinking that wasn’t captured by standard approaches.

Over time, I grew increasingly curious about crafting an alternative that included this “deeper logic.” I wanted to create tests that not only challenge conventional boundaries but also engage test-takers with a format that’s more enjoyable and dynamic than rows of text or numbers. In combining rigor with creativity, I hoped to produce tests that would captivate people who, like myself, crave an experience that reflects the true intricacies of high-range cognition. This blend of challenge and engagement was a calling that I couldn’t ignore, and that’s when I truly began dedicating myself to test construction. 

Jacobsen: At the time, what were the realizations about the tests and the need to develop yours?

Liljegren: My biggest realization about other people’s IQ tests was that they were, frankly, very boring for most people. The majority of the tests I encountered were filled with repetitive tasks—just text and numbers—making the experience feel more like a tedious obligation than an engaging intellectual challenge. It became clear to me that this format didn’t capture the interest of participants, and I could see why many would be unwilling to invest hours on end in something so monotonous. This realization drove me to create my own tests, ones that would not only be intellectually stimulating but also fun, encouraging people to engage deeply with the process without feeling like it was a chore.

Jacobsen: What are common mistakes in trying to make high-range tests valid, reliable, and robust?

Liljegren: A common mistake in creating high-range IQ tests is neglecting the participant’s experience, which can lead to a lack of engagement. If the test is so boring or tedious that people don’t want to spend the necessary time on it, then the test fails to be truly valid. Cognitive ability cannot be accurately measured if participants are rushing through questions or abandoning the test early. A valid test needs to capture the participant’s full cognitive range, which means making it not only intellectually challenging but also engaging enough to keep the participant involved. Without this engagement, the test becomes more of a hurdle than a meaningful assessment.

In the past, when I was a teenager, I took an IQ test administered by a psychologist. At the time, I wasn’t particularly interested in the test, so I rushed through it, not fully engaging with the questions. The result was that I received a score they deemed ‘retard level,’ and the feedback was devastating. However, this experience was incredibly eye-opening for me. It made me realize how much external factors—like lack of interest or engagement—can skew a test result, leading to an inaccurate and harmful assessment of one’s abilities. That experience, while painful, inspired me to create tests that are not only challenging but also engaging enough to allow people to truly demonstrate their cognitive potential, free from the constraints of traditional, often discouraging, testing methods.

Jacobsen: What are the counterintuitive aspects in taking tests and making tests in the high-range?

Liljegren: A major issue with traditional tests is that people are often more focused on completing them as quickly as possible to achieve a higher IQ score. They rush through the questions without taking the time to dive deeper into the problems, which leads to surface-level answers. This tendency to prioritize speed over depth contributes to the overall boredom with text- and number-based tests, preventing individuals from fully exploring the underlying logic or the deeper meanings within the questions.

In contrast, my tests break away from this model by incorporating images and humor, which serve to keep participants engaged and entertained. This approach fosters a sense of enjoyment and curiosity, encouraging participants to delve into the problem-solving process with a genuine desire to explore deeper, rather than just rushing to finish. The inclusion of humor makes the experience more relatable and less daunting, while still maintaining intellectual rigor, creating a more enjoyable and effective testing experience.

Jacobsen: What are the core abilities measured at the higher ranges of intelligence or as one attempts to measure in the high-range of ability?

Liljegren: The core abilities associated with high-range intelligence are not about speed, as calculators and computers are not truly intelligent. Real intelligence involves the ability to approach a problem from multiple perspectives and select the most logical solution. It’s about having a broad understanding of various concepts and identifying the most reasonable choice. Rushing to solve something often limits the range of perspectives considered. True intelligence requires taking the time to explore different ways of solving the same problem, which can take thousands of hours. It’s akin to exploring a forest and visiting every part to fully understand it and create a complete mental map. This process of deep exploration is essential for making the best decision.

Jacobsen: In an overview, what skills and considerations seem important for both the construction of test questions and making an effective schema for them? 

Liljegren: If the author of a test has certain mental limitations or a narrow understanding of basic human behaviors, it can lead to a biased, limited test. This test would reflect the author’s own cognitive framework, potentially making it skewed toward a specific way of thinking. If such a test is used, individuals who score highly may be seen as having similar mental characteristics or limitations, which could simply be a result of the test’s narrow perspective.

In this case, the high scores would no longer represent general intelligence or cognitive flexibility, but rather a shared bias or limitation that the test fails to account for. This would be problematic, as it would reinforce the cognitive limitations of both the author and the test-takers, rather than providing a comprehensive measure of intelligence.

For a test to be valid and accurate, it must be free from these personal biases, ensuring it measures general intelligence, not a particular mindset or cognitive limitation.

Jacobsen: Any thoughts on proposals for dynamic or adaptive tests rather than, let’s call them, “static” tests? That is, instead of a single, unchanging item or set of items presented as a whole test, a collection of algorithmically variant or shifting items that adapt to the testee’s prior answers in a computer interface.

Liljegren: I believe the most effective dynamic test would be a virtual world where time is infinite, and the participant cannot escape until they have successfully solved the test. In this environment, the test taker’s intelligence would be measured based on the choices they make and the paths they explore as they interact with the world around them.

The absence of time pressure is crucial. Just like in real life, there’s no strict timeline for decision-making, and rushing would only limit the depth of exploration. Early choices might lead the participant down a seemingly wrong path, but without a time limit, they would have the opportunity to revisit earlier decisions, re-evaluate their choices, and learn from their mistakes. This reflects the process of growing through experience and finding the best path through reflection and exploration.

The world would be locked, meaning the participant cannot escape or finish the test until they have solved it in their own way. This would ensure that the participant is fully engaged and has the space to explore every facet of the test, allowing for a deeper understanding of their own problem-solving process and decision-making abilities. The test is not a race—it is a journey where intelligence is reflected in adaptation, learning, and growth over time.

By removing time constraints and providing an infinite amount of space to explore, this dynamic test would truly measure the ability to think critically, adapt to new information, and learn from past choices. The best answers would come not from rushing but from the thoughtful, reflective process of solving problems over time in an ever-evolving environment.

Jacobsen: How do you remove or minimize test constructor bias from tests?

Liljegren: I believe that to minimize bias in test construction, the author needs to possess a high level of general intelligence. The higher the intelligence, the broader and more flexible the thinking, which helps in considering multiple perspectives and reducing narrow-minded bias. Lower intelligence tends to create more biased and limited thinking, which may not resonate with a diverse range of test-takers.

Moreover, the test constructor needs to have a well-rounded understanding of human behavior. A test creator with greater intelligence is more likely to recognize these biases and account for them, ensuring that the test is fair and representative of a wider audience.

Jacobsen: How do we know with confidence many listed norms are, in fact, reasonably accurate on many of these tests? What is the range of sample sizes on the tests, even approximately, now? Practically speaking, for good statistics, what is your ideal number of test-takers? You can’t say, “8,128,000,000.”

Liljegren: I believe the sample size for a test should consist only of individuals who take it 100% seriously. When test-takers are fully engaged and committed, the data collected will be far more accurate and reliable. This ensures that the results reflect the true cognitive abilities of the participants, rather than being skewed by rushed or careless answers.

While a larger sample size can be beneficial for diversity and generalization, the quality of the responses is paramount. A smaller, but more focused group of serious participants will yield more valid and meaningful norms. In essence, the test becomes much more accurate when the sample is composed of individuals who approach it with the same level of seriousness and dedication as athletes competing for a gold medal.

In my opinion, a sample size of around 100 highly engaged participants is ideal for creating accurate and reliable test norms. When the group is larger, such as 500 participants, the level of seriousness tends to decrease, especially for those who find themselves near the bottom of the results. As a result, these individuals may rush through the test or fail to fully engage, which can lead to less reliable data.

In contrast, when the sample size is smaller, like 100 participants, the competition feels more real and the stakes are higher. This creates an environment where test-takers are more likely to commit fully to the process and give their best effort. The focus and dedication of such a group result in more meaningful and precise data, as each participant is genuinely invested in the outcome, much like athletes competing for a gold medal. By keeping the sample size smaller and more engaged, the test is able to capture more accurate measures of intelligence and cognitive ability.
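Liljegren’s norming comments can be made concrete with a small sketch. Assuming, purely for illustration, that raw scores in a norming sample are roughly normally distributed, raw scores map onto the familiar IQ scale (mean 100, SD 15) through z-scores, and the standard error of the estimated mean shows how sample size affects the precision of the resulting norms. The raw scores below are invented; this is not Liljegren’s actual norming procedure.

```python
import math
import statistics

def norm_to_iq(raw_scores, mean=100, sd=15):
    """Map raw norming-sample scores to IQ equivalents via z-scores.
    Assumes roughly normal raw scores -- an illustrative
    simplification, not Liljegren's actual norming procedure."""
    mu = statistics.mean(raw_scores)
    sigma = statistics.stdev(raw_scores)
    return [round(mean + sd * (x - mu) / sigma) for x in raw_scores]

def standard_error_of_mean(sd, n):
    """Standard error of the sample mean: se = sd / sqrt(n)."""
    return sd / math.sqrt(n)

# Invented raw scores from a hypothetical norming sample of ten.
sample = [12, 18, 22, 25, 25, 28, 31, 35, 40, 44]
print(norm_to_iq(sample))
print(standard_error_of_mean(15, 100))  # 1.5
```

On this model, statistical precision improves with sample size (the standard error at n = 400 is half that at n = 100), which is why Liljegren’s case for a smaller sample rests on the quality of engagement rather than on the statistics alone.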

Jacobsen: Is English-based bias a prominent problem throughout tests? Could this be limiting the global spread of possible test-takers, confining these tests to particular language spheres? Granted, these tests are taken, to a limited degree, in many countries across most regions of the world.

Liljegren: I avoid English-based bias in my tests by incorporating numbers and images, which are universal elements that don’t rely on language. This approach ensures that both English and non-English speakers have a fair chance to perform based on their cognitive abilities, rather than language proficiency. While English language knowledge can be an advantage for native speakers, non-native speakers might actually have an advantage in some cases. When they encounter unfamiliar words, they are more likely to look them up, which could reveal subtle clues that a native speaker might overlook, assuming they already know the meaning.

Overall, this approach balances things out and ensures fairness for both groups. By relying on numbers and images, I make sure that the test evaluates true cognitive skills, regardless of the language spoken. This way, the test becomes more universally accessible while still maintaining its reliability across different populations.

Jacobsen: When trying to develop questions capable of tapping a deeper reservoir of general cognitive ability, what is important for verbal, numeral, spatial, logical (and other) types of questions? 

Liljegren: I believe that in designing questions that tap into deeper cognitive abilities, it is crucial to integrate different domains—verbal, numerical, spatial, and logical—into a cohesive, interconnected framework. Rather than treating each domain as separate, questions should challenge test-takers to blend these different types of reasoning, creating a more holistic and real-world relevant measure of intelligence.

In traditional tests, we often see sections devoted solely to one type of reasoning: a verbal section, a numerical section, a spatial section, etc. However, true cognitive ability is more complex. It lies in how well someone can synthesize and apply knowledge across these domains to form a broader, unified understanding. This is akin to solving a puzzle where the pieces are of different shapes and forms—verbal, numerical, spatial, and logical. The challenge comes not from solving each piece individually but from recognizing how they interconnect and contribute to the whole.

This integrated approach mirrors real-world problem-solving, where we constantly draw upon diverse areas of knowledge. To test someone’s true cognitive abilities, we must create challenges that require them to blend these elements and think beyond linear, compartmentalized patterns. It’s about understanding the bigger picture and making connections that others might miss, which is often the hallmark of high-level cognitive processing.

In this way, the test becomes more than just a measure of isolated skills. It gauges how well someone can think creatively and flexibly, applying various types of reasoning in novel ways to solve complex problems. This method of testing challenges individuals to think out of the box, drawing on multiple domains to find the best possible solution—a far more comprehensive reflection of intelligence than traditional, domain-specific tests.

Jacobsen: What roadblocks do test-takers tend to create in their thought processes and assumptions around time commitments on these tests, such that they get artificially low scores on high-range tests?

Liljegren: Many test-takers often fall into the trap of underestimating the time commitment required for high-range tests. They tend to think that they should be able to answer the questions quickly, driven by a sense of confidence or even narcissism. These individuals often assume that their initial answer is correct without fully considering alternative perspectives or exploring the problem deeply. This overconfidence typically leads them to rush through the test, which results in lower scores.

The key to performing well on such tests lies in approaching them with humility. When a person is humble enough to accept that they don’t have all the answers and that there could be multiple ways of thinking about a problem, they tend to spend more time reflecting on each question. Rather than sticking rigidly to their first choice, they’re open to exploring different avenues and rethinking their responses. This deeper, more methodical approach leads to better performance, as it allows them to tap into a broader range of insights and avoid missing crucial clues that could improve their answers.

So, in essence, those who approach a high-range test with an open mind and a willingness to consider all possibilities—without rushing or prematurely settling on answers—are far more likely to succeed.

Jacobsen: What is the intended age-range for high-range tests? How do these account for individuals younger and older than this range?

Liljegren: I believe that the intended age range for high-range tests often reflects the cognitive and emotional maturity required to fully engage with them. Younger individuals tend to rush through questions and make quick decisions, often due to a lack of experience or an overestimation of their ability to answer immediately. As people grow older, they gain a sense of relaxation and wisdom that allows them to approach problems more thoughtfully. This maturation process helps individuals realize they don’t have all the answers right away, which leads them to spend more time considering different perspectives and refining their responses.

When I was younger, I would only spend a couple of hours on each test, but now, after years of experience, I dedicate thousands of hours to fully exploring every test I take. This shift in approach illustrates how cognitive growth and emotional development over time lead to better results on high-range tests.

Jacobsen: What is important in constructing and norming a test? 

Liljegren: When constructing and norming a test, one crucial factor that is often overlooked is the cognitive growth that occurs during the testing process. A test that promotes cognitive development as the participant moves through it is not only more engaging but also yields more accurate results. This dynamic approach ensures that the test-taker’s cognitive ability is allowed to evolve, which, in turn, enhances the reliability of the results.

Cognitive Growth During the Test:
One of the most important elements in test construction is ensuring that the test encourages growth in cognitive ability while the participant is engaging with it. This process involves crafting questions that require test-takers to think critically, adapt their strategies, and explore new methods of problem-solving as they progress through the test. By introducing progressively more challenging and thought-provoking questions, the test encourages test-takers to evolve their thinking, enhancing their problem-solving ability and, in turn, their cognitive growth.

This improvement is especially important because it directly influences engagement. When a test-taker sees their cognitive abilities growing during the test—when they feel that they are not just answering questions but also becoming more intelligent throughout the process—they are far more likely to invest the necessary time and focus to fully engage with the test. This increased focus and effort can lead to a more accurate and comprehensive assessment of their potential, as they are operating at their maximum cognitive capacity.

Engagement and Accuracy:
As a test-taker becomes more engaged in the process and experiences cognitive growth, they are more likely to take the time to consider their answers carefully and explore multiple perspectives before finalizing them. This is where the real value of cognitive growth comes into play: when participants are learning and improving as they work through the test, their final answers are more reflective of their true cognitive ability. They are less likely to rush through questions, make careless errors, or settle on superficial solutions.

In contrast, tests that are too short, or lack this cognitive growth element, may encourage rushed decision-making, ultimately leading to less accurate results. In such cases, the test may not fully capture the test-taker’s potential, and the results could be skewed by the lack of cognitive engagement. Therefore, a test should not only measure raw ability but also stimulate growth throughout its duration. By doing so, test-takers’ cognitive abilities are fully exercised and measured at their peak.

The Importance of Test Length:
For this process to take place, the test needs to be long enough to allow for meaningful cognitive growth. If the test is too short, test-takers will not have sufficient time to experience this transformation. As the test progresses, their problem-solving skills improve, which should be reflected in their answers as they revisit and reconsider earlier questions. This iterative process ensures that their final performance represents a more accurate picture of their cognitive abilities.

By fostering cognitive growth during the test, you are not simply assessing the static intelligence of the participant; you are capturing the dynamic nature of their cognitive abilities. This allows for a much more nuanced and accurate understanding of their intelligence, which is crucial when norming the test. This approach can lead to more meaningful norms, as test-takers are measured based on their full cognitive potential, not just their initial capacity.

In summary, test construction and norming should go beyond merely measuring cognitive ability at a fixed point in time. By designing tests that promote cognitive growth, you engage test-takers in a deeper and more meaningful way, which not only improves their performance but also leads to more accurate, reliable, and comprehensive results. This dynamic approach is essential for creating a test that truly measures the depth and breadth of human intelligence.

Jacobsen: Cheaters exist. Frauds exist. How do you a) deal with frauds and cheaters on tests and b) prevent fraud and cheating on those tests? 

Liljegren: I believe that the key to preventing cheating on IQ tests lies in making the test engaging and enjoyable. People tend to cheat when they find the test boring, as they simply want to finish it as quickly as possible, similar to how one might skip through a dull movie. However, if the test is fun and feels like a rewarding journey, participants are far less likely to rush through or cheat.

When the test is designed in such a way that it encourages deep thought, curiosity, and cognitive growth, test-takers are naturally more invested in the process. This engagement reduces the temptation to take shortcuts, as participants are more interested in exploring and solving the problems presented. By making the experience fun and stimulating, you not only prevent cheating but also improve the quality of the data collected.

In essence, if the IQ test becomes an enjoyable challenge, much like a game or an intellectual journey, participants are far less likely to cheat and more likely to put forth their best effort. This approach ensures that the results reflect their true cognitive abilities, rather than rushed or dishonest attempts to finish quickly.

Jacobsen: What is an efficient means by which to ballpark the general factor loading of a high-range test?

Liljegren: To efficiently estimate the general factor loading of a high-range test, the test should incorporate a variety of question types that tap into multiple forms of intelligence and cognitive processes. This ensures the test measures a broad spectrum of abilities, including verbal, numerical, spatial, logical, creative, and abstract thinking. Using only one style of questions—such as rows of text or numbers—limits the scope of intelligence being tested, and can lead to a narrow, predictable response pattern.

Additionally, relying on the same question types repeatedly can result in a learning effect, where test-takers begin to predict the types of questions and answers. This skews the test’s validity, as the test-taker’s experience may be based more on familiarity with the format rather than actual cognitive ability. Therefore, introducing a diverse range of question formats prevents this issue, ensuring that the test captures a fuller, more accurate measure of the general factor of intelligence.

Jacobsen: What is the most precise or comprehensive method to measure the general factor loading of a high-range test, a superset of tests, or a subset of such a superset?

Liljegren: The most comprehensive and precise method to measure the general factor loading of a high-range test is by employing a superset approach, which integrates a variety of subsets. The superset allows for a more holistic view of intelligence by encompassing a diverse array of cognitive abilities, such as numerical, verbal, spatial, and logical reasoning. This broad scope provides a more accurate measurement of general intelligence (g) because it evaluates a wide range of cognitive processes that overlap and interact.

By using a superset, the test becomes dynamic, capturing the interconnections between different cognitive domains. Knowledge from one subset can inform and enhance performance in another, allowing you to form a fuller, more nuanced understanding of a person’s intellectual capacity. This approach not only reduces bias but also prevents the predictability of answers that can arise when a test is too narrowly focused.

Moreover, a superset allows for greater accuracy and robustness in general factor loading by avoiding the limitations of focusing on a single type of reasoning. By examining multiple subsets together, you provide a more comprehensive measure of cognitive ability, reflecting the complex interplay of various intellectual skills.

In summary, a superset ensures that you’re capturing the full range of human intelligence, minimizing the biases associated with narrowly focused tests, and providing a more complete and dynamic assessment of general cognitive ability.
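At the “most precise” end, the standard tool is factor analysis (the keywords above mention confirmatory factor analysis). As a self-contained sketch of the underlying idea, the dominant eigenvector of a battery’s correlation matrix, extracted by power iteration and scaled by the square root of its eigenvalue, approximates each subtest’s loading on the first (general) factor. The 3×3 correlation matrix below is invented for illustration.

```python
import math

def first_factor_loadings(R, iters=200):
    """Power iteration for the dominant eigenvector of a correlation
    matrix R; scaling it by sqrt(eigenvalue) gives each subtest's
    approximate loading on the first (general) factor."""
    n = len(R)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(R[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Rayleigh quotient gives the dominant eigenvalue.
    eigval = sum(v[i] * sum(R[i][j] * v[j] for j in range(n))
                 for i in range(n))
    return [round(vi * math.sqrt(eigval), 3) for vi in v]

# Invented inter-correlations among three subtests of a
# hypothetical battery (say verbal, numerical, spatial).
R = [[1.0, 0.6, 0.5],
     [0.6, 1.0, 0.4],
     [0.5, 0.4, 1.0]]
print(first_factor_loadings(R))
```

In practice one would use a dedicated factor-analysis routine in standard statistical software rather than hand-rolled power iteration; the sketch only shows the principle behind a first-factor loading.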

Jacobsen: What seem like the most appropriate places for people to start when taking your tests, or others’ tests for that matter, taking into account their own skill sets?

Liljegren: My tests are designed to be accessible even to individuals with no prior exposure to IQ testing. The key idea is that as the test-taker progresses, their IQ naturally increases through the process. Each part of the test is interconnected, offering clues within the test itself to help guide them toward solving other sections. Rather than presenting isolated questions, the test is structured as a unified experience where everything fits together, fostering both growth and understanding as they move forward. This approach ensures that the process of taking the test is not only a challenge but also a journey of discovery.

The journey through the entire test is genuinely fun and rewarding. With each question solved, there’s a sense of accomplishment and often laughter, which keeps you engaged and eager to continue. The satisfaction of cracking a question creates a sense of excitement, motivating the test-taker to push forward until they’ve solved it all. The tests are created with the intention of helping people increase their intelligence, not simply taking their money by leading them to believe they are correct when they aren’t. This journey isn’t just about testing; it’s about expanding cognitive abilities in an enjoyable, engaging, and fulfilling way.

Jacobsen: What tests and test constructors have you considered good?

Liljegren: I believe that many tests I’ve encountered are designed not with the intention of fostering genuine intellectual growth, but rather to exploit the test-taker’s desire for validation and to profit from their repeated attempts. These tests often provide immediate validation to make the participant feel correct, only to later disappoint them, leading to the common practice of encouraging a second (and often third) attempt to “fix” their results. This cycle is not about true intelligence testing but about encouraging further payments by exploiting a psychological pattern: the desire to prove oneself right and gain recognition from peers.

This type of testing is harmful because it focuses on validation rather than education. It relies on participants’ egos, motivating them to pay again to prove they’re capable, rather than helping them grow. This creates a cycle where the person is encouraged to rush through the test for validation, only to feel let down and encouraged to submit another payment for another attempt.

In contrast, my tests are designed to be engaging, fun, and intellectually rewarding, with the goal of fostering actual cognitive growth. The experience is meant to be so enjoyable and fulfilling that test-takers don’t want to stop. The aim is to encourage them to fully immerse themselves in the test, where they are learning, exploring, and growing their IQ as they progress. The focus is not on tricking participants or manipulating them for financial gain but on offering a genuine opportunity to develop and discover new intellectual perspectives.

A good test should be an experience that challenges and encourages cognitive growth, one that leaves the test-taker with a sense of accomplishment and a desire to keep going. It’s about helping them learn, not about creating a system where they’re trapped in a cycle of disappointment and further payments.

Jacobsen: What have you learned from making these tests and their variants?

Liljegren: I spent three years on two different tests, working on them simultaneously and dedicating 2,000 hours to each. When I submitted them at the same time, something very interesting happened: I scored my all-time high on one test, but my all-time low on the other. This experience highlighted just how unpredictable and subjective these tests can be.

Even with extensive preparation and effort, the outcome is not guaranteed. The tests are designed in such a way that, despite the time and focus invested, the results can vary dramatically depending on various factors—many of which are beyond your control. This unpredictability demonstrates that intelligence is not solely about raw effort or preparation; it also involves a complex interaction of factors, including problem-solving approach, adaptability, and the ability to navigate unexpected challenges.

Ultimately, this reinforces the notion that high-range tests are inherently unpredictable, and the experience of taking them can vary significantly from one instance to another, regardless of how much effort is put into preparation.

Some authors’ tests feature repeated questions across multiple test versions, likely due to a combination of laziness and a desire to maximize profits. By identifying these repeated questions, I was able to deduce the correct answers through second attempts, as well as spot consistent errors in the scoring of the tests I took. These errors included misspellings of my name, incorrect dates, and discrepancies in my raw scores. The recurrence of these mistakes suggests that many authors are sloppy in scoring, and I must take this into account when submitting my tests.

In such cases, I realized I need to factor in the author’s state of mind during the scoring process. The outcome can vary depending on the author’s circumstances—whether they are distracted, tired, or experiencing stress. To minimize errors, I must plan the timing of my submissions carefully, choosing moments when I anticipate the test author is most likely to score the tests accurately. For mail-based submissions, I also have to consider potential delays or disruptions, such as holidays or issues like mail theft or vandalism, that could impact the delivery or processing of my test. These external factors, which are beyond my control, require careful planning and preparation to ensure the best possible conditions for submitting my tests.

The realization that so many aspects of the process are influenced by factors out of my control has shaped my approach to testing. While it’s frustrating, it also underscores the need to approach the testing process with patience, awareness, and strategic thinking to navigate these challenges effectively.
When creating my own tests, my primary goal is always to foster cognitive growth rather than to make quick profits. Too often, the rush to monetize IQ testing leads to burnout among authors, which in turn results in sloppy test scoring and a lack of care in the process. This is something I’m very conscious of, and I make it a point to thoroughly double-check and verify everything I do. Each test I score is done with the utmost care, knowing that inaccurate results can have lasting consequences on someone’s life.

I understand how a poorly scored test can affect a person, particularly when they are already facing difficulties. A mistake on a test could contribute to feelings of inadequacy or frustration, or even worse, lead to a deeper sense of alienation. I am very mindful of this, and it’s why I dedicate hours to ensuring the accuracy of each test I score. I want the experience of taking my tests to be constructive, encouraging, and enlightening for the test-taker, and to give them an opportunity to truly grow their intelligence.

Moreover, I believe that creating tests with integrity, where the scoring is accurate and fair, has a far-reaching positive impact on individuals. People should leave my tests feeling not only more knowledgeable but also more confident in their abilities, as opposed to feeling confused or disheartened by an inaccurate result.

Ultimately, the goal is always to provide an environment where learning is rewarding and enjoyable. This is why I am so meticulous about every detail in the process, ensuring the test is as much a tool for personal growth as it is an intellectual challenge.

Jacobsen: Thank you for the opportunity and your time, Patrick, and thank you additionally for your patience and forgiveness in my delays.

Liljegren: You’re very welcome! I’m glad I could assist, and I truly appreciate your thoughtful words. If you ever need more help or have further questions in the future, don’t hesitate to reach out. Best of luck with your endeavors, and I hope everything goes smoothly from here on out!

Footnotes

None

Citations

American Medical Association (AMA 11th Edition): Jacobsen S. On High-Range Test Construction 25: Patrick Liljegren. November 2024; 13(1). http://www.in-sightpublishing.com/high-range-25

American Psychological Association (APA 7th Edition): Jacobsen, S. (2024, November 22). ‘On High-Range Test Construction 25: Patrick Liljegren’. In-Sight Publishing. 13(1). http://www.in-sightpublishing.com/high-range-25

Brazilian National Standards (ABNT): JACOBSEN, S. On High-Range Test Construction 25: Patrick Liljegren. In-Sight: Independent Interview-Based Journal, Fort Langley, v. 13, n. 1, 2024.

Chicago/Turabian, Author-Date (17th Edition): Jacobsen, S. 2024. “On High-Range Test Construction 25: Patrick Liljegren.” In-Sight: Independent Interview-Based Journal 13, no. 1 (Winter). http://www.in-sightpublishing.com/high-range-25.

Chicago/Turabian, Notes & Bibliography (17th Edition): Jacobsen, S. “On High-Range Test Construction 25: Patrick Liljegren.” In-Sight: Independent Interview-Based Journal 13, no. 1 (November 2024). http://www.in-sightpublishing.com/high-range-25.

Harvard: Jacobsen, S. (2024) ‘On High-Range Test Construction 25: Patrick Liljegren’, In-Sight: Independent Interview-Based Journal, 13(1). http://www.in-sightpublishing.com/high-range-25.

Harvard (Australian): Jacobsen, S 2024, ‘On High-Range Test Construction 25: Patrick Liljegren’, In-Sight: Independent Interview-Based Journal, vol. 13, no. 1, http://www.in-sightpublishing.com/high-range-25.

Modern Language Association (MLA, 9th Edition): Jacobsen, Scott. “On High-Range Test Construction 25: Patrick Liljegren.” In-Sight: Independent Interview-Based Journal, vol. 13, no. 1, 2024, http://www.in-sightpublishing.com/high-range-25.

Vancouver/ICMJE: Jacobsen S. On High-Range Test Construction 25: Patrick Liljegren [Internet]. 2024 Nov; 13(1). Available from: http://www.in-sightpublishing.com/high-range-25.

License & Copyright

In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. ©Scott Douglas Jacobsen and In-Sight Publishing 2012-Present. Unauthorized use or duplication of material without express permission from Scott Douglas Jacobsen is strictly prohibited. Excerpts and links must use full credit to Scott Douglas Jacobsen and In-Sight Publishing with direction to the original content.
