By Bogi Szalacsi - Senior Associate - infoNation
Artificial intelligence is now the undisputed star sector of the tech industry. Billions of dollars, pounds, yuan and euros are being pumped into AI research, development and business, and the sector is projected to contribute 15 trillion dollars to the global economy by 2030 (just 11 years away).
Growth this rapid inevitably brings pitfalls with it. Pitfalls, dead ends and less-than-ideal solutions can bring disappointment and bad publicity, and without a doubt cost millions of working hours and lost investment.
What can the AI sector learn from other sectors? How can we work smarter to develop responsible machines, systems and algorithms, all of which will increasingly govern our lives? Below are a few suggestions drawn from lessons learned through hundreds of years of technology development.
We need to develop solutions together
When countries work on solutions independently, often in secrecy, even a great outcome for one country can turn into a mess when others try to adopt it. Notable standardisation failures that still plague us today include:
- The metric vs. imperial measurement systems
- Electrical systems around the world (travellers are perpetually frustrated by differing voltages and plug shapes)
- Left-hand vs. right-hand traffic
One would hope that by the end of the 20th century, with our lives increasingly globalised, software engineers and architects would have learned to collaborate better. This is still not the case. More recently, the CDMA and GSM mobile phone technologies were adopted independently in different countries. In the United States, some cellular providers such as Sprint, Verizon and US Cellular use CDMA, while AT&T and T-Mobile use GSM. Most of the rest of the world uses GSM.
Ideally, human values would be more universal
In our diverse world, we need to consider that artificial intelligence will be used by everyone. We cannot be blind to the myriad languages, religions (or lack thereof), backgrounds and cultures of these future users. Solutions developed in the large global technology hubs need to involve diverse teams in every aspect, from planning to testing. We must reach consensus to minimise discrimination, racism and sexism.
Recently the Pope met with Microsoft Chief Brad Smith to discuss the ethical use of artificial intelligence and how AI can serve the common good of all people. Many more meetings of this kind will be needed with religious leaders and ethics experts from around the globe, and technology experts should listen.
Governments need to get on board with this
We can't expect money-driven companies in different countries to reach, on their own, solutions that are best for humanity as a whole. The above-mentioned GSM technology was mandated in Europe by law in 1987 and then taken forward by an industry consortium, which facilitated GSM's global spread.
Governments are pumping large sums into AI research and education (the British government recently announced a £1 billion boost to the sector), but with big money should come ethical oversight as well, to ensure that there is more to this than profit. The UK has shown some leadership in this area by launching the Centre for Data Ethics and Innovation, but much more needs to be done, globally.
There should not be a “race” - we must strive for responsible solutions, not just fast ones
The word "race" comes with a hefty historical package (remember the nuclear arms race?), suggesting that the first participant to reach the goal is somehow more successful than the others. But the development of AI solutions should not be a race. Quality and the benefit of humankind should be the goals, not speed.
Countries enjoy a friendly competition here and there, and that has at times taken us far in the past, during the space race for example. But what started out as a friendly joke about an "AI race" has now become a buzzword, and that is not the way to go. Currently, China leads in AI adoption. Twice as many enterprises in Asia have adopted AI as companies in North America, aided by government engagement, a data advantage and fewer legacy assets. But this could turn, and it probably will.
Education - on all levels, from childhood to adult continuing education
New technologies can only be successful if they are widely adopted and continuously developed and improved for the needs of society. Notable cases where a lack of proper education has hindered the adoption of scientific and technological advances include the anti-vaxxer movement, climate change denial, and elderly people who are not comfortable using the internet.
To ensure that all layers of society benefit from the new technologies AND the future workforce is secured, education systems need to catch up and keep pace. This is no easy feat with data science and AI, two of the fastest-developing fields, but it can and should be done.
No solution is ever too big to fail!
Remember the Hindenburg airship? Windows 8? AOL? MapQuest? Failing is OK, as it often paves the way to bigger, better solutions. But artificial intelligence is now too important to mess up, and we need to learn from the mistakes of the past.
Learn how to master the powerful tools of data science and AI with a six-week, online Southampton Data Science Academy course.