TL;DR: evali uses an adaptive logical reasoning test that, to the best of our knowledge, is unique in the world. It’s AIG + IRT applied to the algorithms, and we’re really proud!
The story of how evali came to be is intertwined with the story of The Talent Company, and with the story of me getting Sara as a colleague. Early on we recognized the need for psychometric support in multiple parts of our business. Since we had skills in organizational psychology as well as the technical skills in-house, we decided to go for it: to make the tool we ourselves would like to buy.
Starting with the personality end of things, we could tap into Sara’s extensive experience as a scholar and practitioner. While not always easy, the path to a product was in many ways clear. We knew what we needed to do.
As we then looked at logical reasoning, the path was suddenly not as clear. We had to start somewhere, so we built a classic linear test to get something off the ground and to complete the product. But researching the topic revealed other, much more exciting possibilities.
When computers entered the picture, adaptive tests became feasible and sought after. Presenting the test taker with items at or around their ability level makes for an overall more pleasant experience. Leaving the pen-and-paper world made testing more accessible as well as more efficient.
In the beginning, logical reasoning items were created manually – an expensive and time-consuming process. The computerized and adaptive world had an appetite for items not easily satisfied by human hands: you need many items in a pool, spanning the levels of ability you want to measure. Historically, this has been one of two really expensive obstacles when creating logical reasoning tests. Could the computer be of help there as well?
In academia, as well as in industry, Automatic Item Generation (AIG) became a thing, built on very different kinds of scientific breakthroughs: formalized logical transformations connected to cognitive processes on the one hand, and machine learning algorithms on the other.
Behind most adaptive tests lies Item Response Theory (IRT), a field of study that models the probability of a response to an item as a function of an underlying latent trait. You let a sufficiently big sample take your items and you get a set of parameters back. Those parameters are then used to select items at or near a test taker’s estimated ability – i.e. an adaptive test. This has historically been the second really expensive obstacle: you need IRT parameters for all the items in your item pool.
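To make the mechanics concrete, here is a minimal sketch of one adaptive step under a standard two-parameter logistic (2PL) IRT model: estimate the probability of a correct answer for each item, then pick the item carrying the most information at the current ability estimate. The item parameters below are made up for illustration; this is not evali’s actual implementation.

```python
import math

def p_correct(theta, a, b):
    """2PL model: probability that a test taker with ability `theta` answers
    correctly an item with discrimination `a` and difficulty `b`."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_information(theta, a, b):
    """How much the item tells us about ability near `theta`."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def pick_next_item(theta_estimate, item_pool):
    """Adaptive step: choose the pool item most informative at the estimate."""
    return max(item_pool,
               key=lambda it: fisher_information(theta_estimate, it["a"], it["b"]))

# Illustrative pool; in a real test the (a, b) parameters come from a
# calibration study on a sufficiently big sample.
pool = [{"id": 1, "a": 1.0, "b": -1.0},
        {"id": 2, "a": 1.2, "b": 0.1},
        {"id": 3, "a": 0.8, "b": 1.5}]

print(pick_next_item(0.0, pool)["id"])  # → 2, the item near the estimate
```

As answers come in, the ability estimate is updated and the next item is selected the same way, which is why the test taker keeps seeing items at or around their level.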
The paragraphs above describe what the world looked like to us as we embarked on this journey a few years ago. AIG could satisfy the need for many items, giving us a big item pool, but we would still be stuck having to perform expensive calibration studies for every generated item we wanted to use.
Remember the formalized logical transformations connected to cognitive processes that were mentioned earlier? Would an algorithm that randomly generates items following a formalized logical transformation create items with the same psychometric properties, and as a consequence partly eliminate the second expensive obstacle? That would result in a practically infinite item pool, in practice removing the risk of an item being shared outside the test setting, and giving each test taker a truly unique and tailored test experience – generated on the fly. This is a thrilling idea!
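As a toy illustration of the idea (hypothetical, not evali’s actual generator): the formalized transformation stays fixed and determines what the item measures, while randomized surface values make every rendered item unique. A number-series item keeps this small enough to show in full.

```python
import random

def generate_series_item(rng):
    """Toy rule-based AIG (hypothetical): the alternating transformation
    'add step, then double' is fixed and defines the item; the randomized
    start and step values only change the item's surface appearance."""
    start = rng.randint(2, 9)   # randomized surface value
    step = rng.randint(2, 5)    # randomized surface value
    series = [start]
    for i in range(3):          # the fixed, formalized transformation
        series.append(series[-1] + step if i % 2 == 0 else series[-1] * 2)
    answer = series[-1] * 2     # the next element follows the 'double' half
    return {"stem": series, "answer": answer}

rng = random.Random(42)         # seeded only to make this sketch reproducible
item = generate_series_item(rng)
print(item["stem"], "->", item["answer"])
```

Every item this rule produces is structurally identical, which is exactly the bet described above: if that holds psychometrically, IRT parameters could be estimated once for the transformation and reused for all the items it generates.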
As it turns out, we were not alone in having this idea. An end-to-end solution, a complete test, has been made in an educational setting somewhere in North America, for a different latent trait than logical reasoning. Dr Diego Blum suggested in one of his papers that this would be interesting to study (the work of Dr Blum on AIG has been a huge inspiration for what we’ve done).
So: AIG + IRT on the algorithm instead of on the generated item. Our holy grail. Step by step, piece by piece, until we could launch it as part of the evali platform ✨.
Photo by Soheb Zaidi on Unsplash.