We have been excited by our ability to use AI in initially very simple ways: we have an ‘Opmodal Virtual Assistant’, which leverages ChatGPT ‘educated’ with the Opmodal User Guide. We are now designing solutions for some more ambitious use cases:
Using ChatGPT to read Procedure documents, distil them into SIPOC representations of flows – and inject these directly into Opmodal
Using Python to parse and condense Visio swimlane diagrams (in .VDX format) and, as above, generating a series of connected SIPOCs in Opmodal (a minimal sketch of this pipeline follows this list)
Extending our Opmodal AI Assistant / Help Bot to give users contextual advice: which Processes or RAID items most urgently need attention, or how to improve a Process Health Score.
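To make the Visio item above concrete, here is a minimal sketch of how such a pipeline could look. It assumes the .VDX file follows Visio's 2003-era XML schema and uses the OpenAI Python client; the file name, prompt wording, model choice and SIPOC JSON shape are illustrative assumptions rather than Opmodal's actual implementation.

```python
# A minimal sketch (not Opmodal's implementation): extract shape labels and
# connections from a Visio .vdx file (XML), then ask an LLM to propose a
# SIPOC breakdown. Prompt wording, JSON shape and model name are assumptions.
import json
import xml.etree.ElementTree as ET

from openai import OpenAI  # assumes the official openai Python package


def extract_vdx_elements(path: str) -> dict:
    """Pull shape labels and connector endpoints out of a .vdx file.

    .vdx is Visio's 2003-era XML format; the element names below come from
    that schema, so adjust if your files use a different Visio XML dialect.
    """
    tree = ET.parse(path)
    shapes, connects = {}, []
    for elem in tree.iter():
        tag = elem.tag.split("}")[-1]  # ignore the XML namespace prefix
        if tag == "Shape":
            shape_id = elem.get("ID")
            # Concatenate any Text children to get the shape's label.
            label = " ".join(
                (t.text or "").strip()
                for t in elem.iter()
                if t.tag.split("}")[-1] == "Text"
            ).strip()
            if shape_id and label:
                shapes[shape_id] = label
        elif tag == "Connect":
            connects.append(
                {"from": elem.get("FromSheet"), "to": elem.get("ToSheet")}
            )
    return {"shapes": shapes, "connections": connects}


def propose_sipoc(diagram: dict) -> str:
    """Ask an LLM to restate the extracted flow as SIPOC rows (JSON)."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = (
        "Below are shapes and connections extracted from a process swimlane "
        "diagram. Summarise the flow as a JSON list of SIPOC entries, each "
        "with supplier, input, process, output and customer fields.\n\n"
        + json.dumps(diagram, indent=2)
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    extracted = extract_vdx_elements("procedure_flow.vdx")  # hypothetical file
    print(propose_sipoc(extracted))
```

In practice the extracted shapes would also need swimlane / container information, and the LLM's output would be validated before anything is injected into Opmodal; the sketch simply shows the overall shape of the flow.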
While we are excited about developing these capabilities (in the fintech world it currently feels as if you may as well go home if you are not building AI-enhanced tooling), recent conversations with a number of clients have been a timely reminder to take a beat and consider the data and IP leakage concerns of the organisations we seek to engage with.
It will take time for organisations to decide how these tools can safely be used and how to define what can and cannot be passed into them. From an Opmodal perspective, as excited as we are about implementing these tools, we can see that at a minimum we should make them optional and switchable for clients.
I heard someone point out recently that nobody mentions the Turing test any more.
If you are not familiar with it, the Turing test, originally called the imitation game by Alan Turing in the 1950s, was conceived as a test of a machine's ability to exhibit intelligent behaviour indistinguishable from that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test.
Out of curiosity, I checked the 2021 view on how close we were at that point to constructing a model capable of passing that benchmark assessment.
“It might happen around 2030 – although some scientists say not earlier than 2040”.
By 2022, Google were firing an engineer who believed the AI they were developing had become sentient…
Now, in the first half of 2023, we already have studies purporting to show that an appropriately trained LLM (Large Language Model) like ChatGPT 4 can not only surpass human doctors in the accuracy of clinical assessments and recommended treatments for ailments described in written form, but also exhibit significantly more empathetic language than the human GPs.
As Ferris Bueller said: "Life moves pretty fast...."