Big Hairy, Spaghetti Questions

  • I’m going to answer this question by not answering the question (soz).

    I used to struggle with imposter syndrome, feeling like I had to justify every decision I made. But as I grew more confident (across my work but also my hobbies, like oil painting, music and photography), I realised that we humans learn and refine skills over time, almost always without consciously recognising it. What we call “intuition” is often just thousands of hours of practice, turning learned behaviours into second nature.

    There are a couple of oft-cited examples from psychologist Gary Klein. One recounts the story of a firefighter who instinctively knew something was wrong and ordered his team out of a house where the fire appeared to be contained. Moments later, the floor collapsed: the primary fire was in the basement, and while he didn't know that, his instincts correctly told him to get out. Similarly, a paramedic didn't like something about how her father-in-law looked one day and sent him to the ER despite no apparent symptoms. She saved his life; he had a major blockage and was about to have a heart attack.

    I used to work in the jewellery industry, spending a year in the diamond department of a national jewellery chain before switching to the sterling silver department. In those two years, I got very good at quickly differentiating between diamonds and cubic zirconias. Usually, it came down to a combination of the setting, the clarity, the cut, the surrounding piece, and the most subtle differences in the colours reflected (which, to further complicate things, change based on the colours and lighting of the surroundings). A mishmash of clues and vibes that let me guess correctly almost every time. (Please don't test me on this now though, as I'm way out of practice!) The same goes for an art dealer or a museum curator who can spot a fake a mile away!

    All of the above examples are humans doing what humans do best: using inductive reasoning to figure things out. Inductive reasoning is probabilistic; it looks at a set of clues (e.g. how hot a burning room feels, your father-in-law's cheeks, the subtle pink colour reflected by a gem) and fairly quickly decides it is probably XYZ.
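
    To make that concrete, here's a toy sketch (with entirely invented numbers, so please don't read any real gemmology into it) of how a handful of weak clues can stack up into a confident "probably XYZ":

    # Toy illustration only: a naive-Bayes-style "gut feel" that folds weak,
    # independent clues into one probability. Every number here is invented.

    def combine_clues(prior: float, likelihood_ratios: list[float]) -> float:
        """Update a prior with likelihood ratios, each saying how much more
        likely a clue is if the stone is a cubic zirconia, not a diamond."""
        odds = prior / (1 - prior)
        for lr in likelihood_ratios:
            odds *= lr
        return odds / (1 + odds)

    # Start at 50/50, then fold in three subtle clues: a cheap-looking
    # setting, suspiciously flawless clarity, an orange-ish flash of fire.
    print(combine_clues(0.5, [3.0, 2.0, 4.0]))  # ~0.96, probably a CZ

    No single clue is decisive, but multiplied together they push the odds well past a coin flip, which is roughly what those two years of practice felt like.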

    This raises an interesting point. We expect AI to provide fully transparent explanations of its reasoning, yet the best AI models are largely inspired by how human cognition works. They rely on this inductive, probabilistic reasoning rather than the deductive, rule-based logic we tend to associate with ‘robots’.

    From a business perspective, full transparency also poses risks. Revealing too much can expose intellectual property, allowing competitors to copy strategies or bad actors to exploit the system. Toby Walsh argues that instead of demanding total transparency for every AI decision, it's more important to foster overall trust in the tool itself, a point I strongly agree with. That said, he also makes a strong case that AI should be held to higher standards of accountability because of its dispersed decision-making power, and that's a responsibility we shouldn't overlook.

    So, after all that waffle, what's my answer? I don't think AI should be fully transparent at the individual output level. However, AI creators do need to set up their tools in a way that engenders trust, by showing their work is unbiased and well thought out.

  • My feelings on this have changed a lot. The second principle in Ethan Mollick's Co-Intelligence is to always keep humans in the loop.

    An exceptionally disastrous case study of this in motion was when the UK's exams regulator, Ofqual, used an algorithm to assign A-level grades in 2020 because the pandemic prevented students from sitting their final exams. Many students received grades they felt were unfairly low, and some took issue with how those grades were calculated, including being marked down simply because their school had historically performed worse. Funnily enough, it looks like they had hoped to use the model as a second pair of eyes alongside normal marking, perhaps to establish its accuracy over time for use in future years. Just not, you know, untested in 2020. They even wrote, “Any future use of AI is likely to take some time and a lot of testing. We are not going to suddenly see AI being used at scale in marking high profile qualifications overnight.”

    (Side note: I attended high school in Queensland, which, until recently, used the ‘OP score’: an algorithm that sounds very similar to the UK’s 2020 scoring. It was a percentile grading across the whole state, combining our school marks with general aptitude tests that everyone took regardless of their subjects. It then ranked students against the others in their school and ranked schools against each other, placing everyone on a bell curve.

    I definitely saw the benefits of the algorithm, scoring higher because I went to a prestigious, extremely academically focused high school. I also accidentally gamed the system by taking 10 subjects. There's a rough sketch of how that kind of scaling works below.)
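
    To be clear, Queensland never published the full OP formula, so this is only a minimal sketch of the general idea: standardise marks within each school, then rank everyone statewide. The schools, names, and numbers are all made up, and the real system also used the common aptitude test to scale schools against one another, which this toy version skips.

    # Rough sketch only, not the real OP formula: standardise each student's
    # mark within their school, then rank everyone statewide on one curve.

    from statistics import mean, stdev

    def scale_within_school(marks: dict[str, float]) -> dict[str, float]:
        """Convert raw marks to z-scores so schools with different
        marking standards can sit on one comparable curve."""
        mu, sigma = mean(marks.values()), stdev(marks.values())
        return {student: (m - mu) / sigma for student, m in marks.items()}

    # Two hypothetical schools with very different raw marking standards.
    school_a = scale_within_school({"Ana": 92, "Ben": 88, "Cal": 70})
    school_b = scale_within_school({"Dee": 65, "Eli": 60, "Fay": 40})

    # Merge and rank statewide, turning ranks into rough percentile bands.
    statewide = {**school_a, **school_b}
    ranked = sorted(statewide, key=statewide.get, reverse=True)
    for rank, student in enumerate(ranked, start=1):
        print(f"{student}: top {round(100 * rank / len(ranked))}%")

    The upshot is the same one I lived through: your final number depends as much on who you're being curved against as on your own marks.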

  • In progress.

  • Answer in progress.

    But honestly I don’t think we can.

This page was written by a human (this nerd), with a lot of thinking and perspectives from books written by experts in AI and ethics who know a lot more than me.