The hype and doom surrounding generative AI models such as ChatGPT often gloss over how much work remains before their predicted impact can materialize. While these new large language models will be transformative, they must first overcome numerous technical, social, and economic challenges.
Arjun Ramani and Zhengdong Wang describe many of these challenges in "Why transformative artificial intelligence is really, really hard to achieve," published on The Gradient.
They discuss the difficulty of AI creating productivity gains in labor-intensive areas such as healthcare, education, and construction. The fine motor control of robots is not advancing as fast as large language models, which themselves still require millions of humans to annotate and train them. The authors remind us that much of the process knowledge AI models would need is not written down anywhere, citing Michael Polanyi's observation that "we can know more than we can tell."
There are many other AI challenges as well: data quality and common ontologies, securing sufficient data quantity, data sources (local vs. industry), model reproducibility, capturing inputs and outputs that might not be digitized, privacy, data rights, trust, transparency, and the transformation of current processes. I have no doubt these will get addressed. How they get addressed will determine the promise and perils of AI.