Published on 22/08/2023 | Written by Heather Wright
Changing skills, and new challenges for CIOs…
Fumbi Chima is adamant that AI, generative AI included, isn’t going to take our jobs.
To be clear, Chima, a global technology and transformation officer and board member, acknowledges that AI will replace some jobs and require reskilling – but human interaction and input will continue to be critical in work, now and into the future.
In New Zealand this week for the CIO Summit, where she was a keynote speaker, Chima told iStart that AI’s best use lies firmly in working in conjunction with humans.
“The machines will absolutely beat us in efficiency, but they cannot do the same in quality.”
“There are some aspects of it which are a replacement – it can and will create efficiency in process automation – but the majority of it is not a replacement and we still need humans to validate AI’s assumptions,” she says.
“We have this mindset that AI is going to take over the world and jobs are going to go away. But what we are going to be doing is really more about reskilling, and it’s going to create opportunities globally and have a positive impact on the economy because, as with anything else, we’re creating whole new diverse skills and that in itself will create its own global economy and positive global economic impact.
“And importantly, we still need the human interactions, because AI is still biased, there are still hallucinations that we need to rephrase.”
Chima says it’s up to all of us to step up to the plate and ensure we use AI to fulfil the promise technology was supposed to deliver: improving current conditions.
“We need to use AI not as a replacement, but as a substitute – a way to fill gaps when humans need a mental break.
“When we have AI work with us, instead of having an irreplaceable human element we can find greater success while not forgetting the path that brought us to this form of innovation in the first place,” she says.
Human skills that AI doesn’t possess, including discernment, the ability to research, and knowing how to ask and phrase insightful questions, will be key to making sure we’re not led astray by false information AI might spit out.
The cornerstone of it all is media literacy, an element that will be increasingly important for humans moving forward.
“In order to thrive in generative AI we need to develop our skills of curiosity, of really wanting to explore and understand the world, and to give information honestly and directly in a world where corporate mistruths and flattery are even easier for machines to produce than for humans.”
Diversity, too, will be even more important.
“Having different perspectives within computer programming has always been a challenge, as most of the technology developed is made by a certain demographic. This causes issues such as face recognition that cannot pick up people with darker skin, and generative AI being used to promote racist sentiments.
“These issues would be easy for a human to overcome, but since computers are limited by their programming there needs to be a diverse range of people to combat this and the future issues with bias that generative AI will have.
“While it is clear that the machines will absolutely beat us in efficiency, as seen in calculations and now in creative spheres, they cannot do the same in quality. So if the machine can do the job faster, to compete with them and keep the diverse and vibrant world we live in, we need to do the job better.”
Having employees with research skills will also be important, Chima says, enabling organisations to ensure the information they are operating on is correct.
“In addition, being able to have employees who have greater analytical skills and who understand the deeper part of any message given will be essential in making sure that the wrong message is not accidentally conveyed by your organisation,” she adds.
Despite her optimism for the future of AI, Chima does sound a note of caution, saying that while generative AI can be a vital tool and will revolutionise the way we communicate with each other, it can also be a weapon if used improperly.
“Having the right people working with and for you can correct this issue, specifically people who want to find and fight for the truth, which is essential if humanity and history are to continue forwards.”
She’s also concerned that not enough time is being spent closely analysing what AI could mean for an organisation – and its customers.
She believes many organisations are rushing headlong into generative AI, hoping it might be a silver bullet, capable of solving any and all issues.
Instead, she’s calling on leaders to pause and consider the true benefits and ramifications.
“CIOs, CEOs and boards need to figure out exactly what they need to use this for, which use cases they can use AI for that pertain to their specific problem.
“It’s not the be all and end all for everything. The way we are looking at it, even at the board level, is: ‘oh, it’s going to replace x or y’, but it cannot and should not.
“There is no silver bullet and the problem AI solves for me is very different from the problem it solves for the organisation down the road. And how I utilise data is very, very different.”
Chima believes all businesses have a huge responsibility when it comes to solving the issues of bias in AI.
“Generative AI took everyone by surprise. I equate the acceleration of the technology to being another pandemic. It took us overnight and the adoption was unbelievable. We got so used to that aspect that we didn’t necessarily think about the implications.
“Now we are talking about hallucinations, bias, thinking some of the information is actually incorrect.”
She laughs as she recounts how asking ChatGPT to provide information about her throws up completely incorrect information – including that she is male.
“I like the thing it has created about me, but none of it is true,” she says, laughing.
“The reality is we are basing it on something that is not curated, not validated, not QA’d and we have to be very mindful of that.”
While Chima can laugh about the assumptions ChatGPT has made about her, biases hold the potential to further disadvantage those who are already disadvantaged. She cites the example of someone from a disadvantaged background applying for a loan, noting that additional information may be needed to help determine the approval.
“AI is not going to do that. And that’s what we’re missing.”
For CIOs there’s an added element.
“Our job is always to continue to create understanding and educate. But it’s not the role of the CIO alone, it’s the role of every individual to stay abreast of what is going on. Our job as CIOs, however, is to protect, because this is data-driven, so CIOs need to guard against it being used incorrectly. And that becomes more and more difficult.”
Organisations should set up AI literacy programmes to educate all their workers, technical and non-technical alike, about the merits and demerits of AI.