The systematic nature of laddering gives the impression that it would be easy to automate in software. The reality is more complex. Basic laddering is easy to automate, but anything more sophisticated is better handled by a human being.
The link below leads to some basic upward laddering demonstration software, which shows the key points.
The software lets the user select a domain (e.g. drinking vessels, or places).
It then shows the user a random pair of images from that domain, and asks which of those images the user would prefer, and why.
After the user has given their answer in the first text box, the software ladders upwards another level.
The software then offers the user three options: more laddering within the same domain, switching to a different domain, or stopping the session. When the user stops the session, they are taken to the Hyde & Rugg home page.
The demonstration software at the link doesn’t keep a record of the user’s answers; a fully configured system would offer to keep one.
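For readers who find code clearer than prose, here is a minimal Python sketch of that interaction flow. It is not the actual demonstration code: the domains, items, question wording and function names are all illustrative assumptions, and the real software uses images rather than text labels.

```python
import random

# Illustrative domains and items; the real demonstration software shows
# photographs, and its actual domains and wording may differ.
DOMAINS = {
    "drinking vessels": ["a mug", "a wine glass", "a travel cup", "a tankard"],
    "places": ["a beach", "a forest", "a city square", "a mountain path"],
}

def ladder_upward(items, levels=2, keep_record=False):
    """Ask for a preference between a random pair, then repeatedly ask
    'why' so that each answer becomes the subject of the next question."""
    a, b = random.sample(items, 2)
    print(f"Which would you prefer: {a} or {b}?")
    reason = input("Why would you prefer that one? ")
    record = [reason]
    for _ in range(levels):
        reason = input(f"Why is '{reason}' important to you? ")
        record.append(reason)
    # The demonstration version discards the answers; a fully configured
    # system would offer to keep them.
    return record if keep_record else None

def run_session():
    domain = input(f"Choose a domain from {list(DOMAINS)}: ").strip()
    while domain in DOMAINS:
        ladder_upward(DOMAINS[domain])
        choice = input("More laddering (m), new domain (n), or stop (s)? ").strip().lower()
        if choice == "n":
            domain = input(f"Choose a domain from {list(DOMAINS)}: ").strip()
        elif choice != "m":
            break  # stopping the session; the live demo redirects to the home page here

if __name__ == "__main__":
    run_session()
```

The keep_record flag marks the point made above: the demonstration version discards the answers, whereas a fully configured system would offer to store them.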
This approach can be very useful for helping people clarify their own preferences and higher-level goals, swiftly and anonymously.
Click here to go to the upward laddering software (external site)
Upward laddering software versus manual upward laddering
Basic upward laddering software can be very useful for getting initial insights quickly and systematically, and it can be made available to large numbers of people at once. For domains such as helping people decide what they want to do with their lives, that combination of speed and reach is a real strength.
However, basic laddering software struggles with responses that don’t fit the expected format. A common example is the reply “From what viewpoint?” when you ask someone which option they would prefer and why. A human elicitor can understand and answer that question, for instance by asking which viewpoints are possible, and then selecting one of those viewpoints as a starting point. It’s possible to write software to handle this, but it adds another layer of complexity.

Similarly, a common response is for the person to give a list of reasons why they would prefer one option, rather than a single reason. Again, a human elicitor can handle this easily; handling it in software is possible to some extent, but it raises some complex issues in natural language processing.
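As a rough illustration of why this gets complicated, the hypothetical Python sketch below uses a deliberately naive heuristic to flag answers that don’t fit the expected single-reason format. It is not how any real laddering system works; robust handling would need proper natural language processing.

```python
import re

def classify_answer(answer: str) -> str:
    """Very naive heuristic for spotting answers that basic laddering
    software can't handle: counter-questions (e.g. 'From what viewpoint?')
    and multi-reason lists. Real handling would need proper NLP."""
    text = answer.strip()
    if text.endswith("?"):
        return "counter-question"          # e.g. "From what viewpoint?"
    # Crude check for a list of reasons: clauses joined by commas,
    # semicolons, 'and', or an explicit numbered list.
    parts = re.split(r",|;|\band\b|\d+\.", text)
    if len([p for p in parts if p.strip()]) > 1:
        return "multiple reasons"
    return "single reason"

print(classify_answer("From what viewpoint?"))                        # counter-question
print(classify_answer("It's lighter, cheaper and easier to clean"))   # multiple reasons
print(classify_answer("Because it keeps drinks hot"))                 # single reason
```

Even this crude check misclassifies answers such as “it’s tried and tested”, which is a single reason; that is exactly the kind of ambiguity a human elicitor resolves without effort.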
In practice, people using the software usually learn swiftly to adapt their answers to fit its expected format. This makes sessions run more smoothly, but at the cost of losing answers that don’t fit neatly within that format.
Laddering software complements human elicitors, rather than being a substitute for them. We recommend practising with both, and learning when to use which approach.