The future of artificial intelligence

Seven ways AI can change the world for better ... or worse

In a nondescript building close to downtown Chicago, Marc Gyongyosi and the small but growing crew of IFM/Onetrack.AI have one rule that rules them all: think simple. The words are written in simple font on a simple sheet of paper that’s stuck to a rear upstairs wall of their industrial two-story workspace. What they’re doing here with artificial intelligence, however, isn’t simple at all.

Sitting at his cluttered desk, located near an oft-used ping-pong table and prototypes of drones from his college days suspended overhead, Gyongyosi punches some keys on a laptop to pull up grainy video footage of a forklift driver operating his vehicle in a warehouse. It was captured from overhead courtesy of a Onetrack.AI “forklift vision system.”

THE FUTURE OF ARTIFICIAL INTELLIGENCE

Artificial intelligence is impacting the future of virtually every industry and every human being. It has acted as the main driver of emerging technologies like big data, robotics and IoT, and it will continue to act as a technological innovator for the foreseeable future.

Employing machine learning and computer vision for detection and classification of various “safety events,” the shoebox-sized device doesn’t see all, but it sees plenty. Like which way the driver is looking as he operates the vehicle, how fast he’s driving, where he’s driving, locations of the people around him and how other forklift operators are maneuvering their vehicles. IFM’s software automatically detects safety violations (for example, cell phone use) and notifies warehouse managers so they can take immediate action. The main goals are to prevent accidents and increase efficiency. The mere knowledge that one of IFM’s devices is watching, Gyongyosi claims, has had “a huge effect.”

“If you think about a camera, it really is the richest sensor available to us today at a very interesting price point,” he says. “Because of smartphones, camera and image sensors have become incredibly inexpensive, yet we capture a lot of information. From an image, we might be able to infer 25 signals today, but six months from now we’ll be able to infer 100 or 150 signals from that same image. The only difference is the software that’s looking at the image. And that’s why this is so compelling, because we can offer a very important core feature set today, but then over time all our systems are learning from each other. Every customer is able to benefit from every other customer that we bring on board because our systems start to see and learn more processes and detect more things that are important and relevant.”

THE EVOLUTION OF AI

IFM is just one of countless AI innovators in a field that’s hotter than ever and getting more so all the time. Here’s a good indicator: Of the 9,100 patents received by IBM inventors in 2018, 1,600 (or nearly 18 percent) were AI-related. Here’s another: Tesla founder and tech titan Elon Musk recently donated $10 million to fund ongoing research at the non-profit research company OpenAI — a mere drop in the proverbial bucket if his $1 billion co-pledge in 2015 is any indication. And in 2017, Russian president Vladimir Putin told schoolchildren that “Whoever becomes the leader in this sphere [AI] will become the ruler of the world.” He then tossed his head back and laughed maniacally.

OK, that last thing is false. This, however, is not: After more than seven decades marked by hoopla and sporadic dormancy during a multi-wave evolutionary period that began with so-called “knowledge engineering,” progressed to model- and algorithm-based machine learning and is increasingly focused on perception, reasoning and generalization, AI has re-taken center stage as never before. And it won’t cede the spotlight anytime soon.

THE FUTURE IS NOW: AI’S IMPACT IS EVERYWHERE

There’s virtually no major industry modern AI — more specifically, “narrow AI,” which performs objective functions using data-trained models and often falls into the categories of deep learning or machine learning — hasn’t already affected. That’s especially true in the past few years, as data collection and analysis have ramped up considerably thanks to robust IoT connectivity, the proliferation of connected devices and ever-speedier computer processing.
Some sectors are at the start of their AI journey; others are veteran travelers. Both have a long way to go. Regardless, the impact artificial intelligence is having on our present-day lives is hard to ignore:

Transportation: Although it could take a decade or more to perfect them, autonomous cars will one day ferry us from place to place.
Manufacturing: AI-powered robots work alongside humans to perform a limited range of tasks like assembly and stacking, and predictive analysis sensors keep equipment running smoothly.
Healthcare: In the comparatively AI-nascent field of healthcare, diseases are more quickly and accurately diagnosed, drug discovery is sped up and streamlined, virtual nursing assistants monitor patients and big data analysis helps to create a more personalized patient experience.
Education: Textbooks are digitized with the help of AI, early-stage virtual tutors assist human instructors and facial analysis gauges the emotions of students to help determine who’s struggling or bored and better tailor the experience to their individual needs.
Media: Journalism is harnessing AI, too, and will continue to benefit from it. Bloomberg uses Cyborg technology to help make quick sense of complex financial reports. The Associated Press employs the natural language abilities of Automated Insights to produce 3,700 earnings report stories per year — nearly four times more than in the recent past.
Customer Service: Last but hardly least, Google is working on an AI assistant that can place human-like calls to make appointments at, say, your neighborhood hair salon. In addition to words, the system understands context and nuance.

But those advances (and numerous others, including this crop of new ones) are only the beginning; there’s much more to come — more than anyone, even the most prescient prognosticators, can fathom.

“I think anybody making assumptions about the capabilities of intelligent software capping out at some point are mistaken,” says David Vandegrift, CTO and co-founder of the customer relationship management firm 4Degrees.

With companies collectively spending nearly $20 billion on AI products and services annually, tech giants like Google, Apple, Microsoft and Amazon spending billions to create those products and services, universities making AI a more prominent part of their respective curricula (MIT alone is dropping $1 billion on a new college devoted solely to computing, with an AI focus), and the U.S. Department of Defense upping its AI game, big things are bound to happen. Some of those developments are well on their way to being fully realized; some are merely theoretical and might remain so. All are disruptive, for better and potentially worse, and there’s no downturn in sight.

“Lots of industries go through this pattern of winter, winter, and then an eternal spring,” former Google Brain leader and Baidu chief scientist Andrew Ng told ZDNet late last year. “We may be in the eternal spring of AI.”

THE IMPACT OF AI ON SOCIETY

‘HOW ROUTINE IS YOUR JOB?’: NARROW AI’S IMPACT ON THE WORKFORCE

During a lecture last fall at Northwestern University, AI guru Kai-Fu Lee championed AI technology and its forthcoming impact while also noting its side effects and limitations. Of the latter, he warned:

“The bottom 90 percent, especially the bottom 50 percent of the world in terms of income or education, will be badly hurt with job displacement…The simple question to ask is, ‘How routine is a job?’ And that is how likely [it is] a job will be replaced by AI, because AI can, within the routine task, learn to optimize itself. And the more quantitative, the more objective the job is—separating things into bins, washing dishes, picking fruits and answering customer service calls—those are very much scripted tasks that are repetitive and routine in nature. In the matter of five, 10 or 15 years, they will be displaced by AI.”

In the warehouses of online giant and AI powerhouse Amazon, which buzz with more than 100,000 robots, picking and packing functions are still performed by humans — but that will change.

Lee’s opinion was recently echoed by Infosys president Mohit Joshi, who at this year’s Davos gathering told the New York Times, “People are looking to achieve very big numbers. Earlier they had incremental, 5 to 10 percent goals in reducing their workforce. Now they’re saying, ‘Why can’t we do it with 1 percent of the people we have?’”

RETRAIN & EDUCATE: EASING THE GROWING PAINS OF AN AI-POWERED WORKFORCE

On a more upbeat note, Lee stressed that today’s AI is useless in two significant ways: it has no creativity and no capacity for compassion or love. Rather, it’s “a tool to amplify human creativity.” His solution? Those with jobs that involve repetitive or routine tasks must learn new skills so as not to be left by the wayside. Amazon even offers its employees money to train for jobs at other companies.

“One of the absolute prerequisites for AI to be successful in many [areas] is that we invest tremendously in education to retrain people for new jobs,” says Klara Nahrstedt, a computer science professor at the University of Illinois at Urbana–Champaign and director of the school’s Coordinated Science Laboratory.

She’s concerned that’s not happening widely or often enough. IFM’s Gyongyosi is even more specific.

“People need to learn about programming like they learn a new language,” he says, “and they need to do that as early as possible because it really is the future. In the future, if you don’t know coding, you don’t know programming, it’s only going to get more difficult.”

And while many of those who are forced out of jobs by technology will find new ones, Vandegrift says, that won’t happen overnight. As with America’s transition from an agricultural to an industrial economy during the Industrial Revolution, which played a big role in causing the Great Depression, people eventually got back on their feet. The short-term impact, however, was massive.

“The transition between jobs going away and new ones [emerging],” Vandegrift says, “is not necessarily as painless as people like to think.”

Mike Mendelson, a “learner experience designer” for NVIDIA, is a different kind of educator than Nahrstedt. He works with developers who want to learn more about AI and apply that knowledge to their businesses.

“If they understand what the technology is capable of and they understand the domain very well, they start to make connections and say, ‘Maybe this is an AI problem, maybe that’s an AI problem,’” he says. “That’s more often the case than ‘I have a specific problem I want to solve.’”

REWARDS & PUNISHMENT: AI’S NEAR-FUTURE RAMIFICATIONS

In Mendelson’s view, some of the most intriguing AI research and experimentation that will have near-future ramifications is happening in two areas: “reinforcement” learning, which deals in rewards and punishment rather than labeled data; and generative adversarial networks (GANs for short), which allow computer algorithms to create rather than merely assess by pitting two nets against each other. The former is exemplified by the Go-playing prowess of Google DeepMind’s AlphaGo Zero, the latter by original image or audio generation that’s based on learning about a certain subject like celebrities or a particular type of music.
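
As a rough illustration of the GAN idea of pitting two networks against each other, here is a minimal PyTorch sketch on toy one-dimensional data; the network sizes, the data distribution and the variable names are illustrative assumptions, not details of any system mentioned in this article.

```python
import torch
import torch.nn as nn

# Generator: turns random noise into fake samples.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how "real" a sample looks (a probability between 0 and 1).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, 1) * 0.5 + 2.0   # toy "real" data clustered near 2.0
    fake = G(torch.randn(32, 8))             # generator's current attempt at fakes

    # Discriminator learns to label real samples 1 and generated samples 0.
    d_loss = (loss_fn(D(real), torch.ones(32, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator learns to make the discriminator label its fakes as real.
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(5, 8)).detach().squeeze())  # samples should drift toward 2.0
```

The generator only ever improves by fooling the discriminator, which is the “create rather than merely assess” dynamic Mendelson describes.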

On a far grander scale, AI is poised to have a major effect on sustainability, climate change and environmental issues. Ideally and partly through the use of sophisticated sensors, cities will become less congested, less polluted and generally more livable. Inroads are already being made.

“Once you predict something, you can prescribe certain policies and rules,” Nahrstedt says. Sensors on cars that send data about traffic conditions, for example, could predict potential problems and optimize the flow of cars. “This is not yet perfected by any means,” she says. “It’s just in its infancy. But years down the road, it will play a really big role.”

AI AND THE FUTURE OF PRIVACY & HUMAN RIGHTS

Of course, much has been made of the fact that AI’s reliance on big data is already impacting privacy in a major way. Look no further than Cambridge Analytica’s Facebook shenanigans or Amazon’s Alexa eavesdropping, two among many examples of tech gone wild. Without proper regulations and self-imposed limitations, critics argue, the situation will get even worse. Apple CEO Tim Cook has derided competitors Google and Facebook (surprise!) for greed-driven data mining.

“They’re gobbling up everything they can learn about you and trying to monetize it,” he said in a 2015 speech. “We think that’s wrong.”

Last fall, during a talk in Brussels, Belgium, Cook expounded on his concern.

“Advancing AI by collecting huge personal profiles is laziness, not efficiency,” he said. “For artificial intelligence to be truly smart, it must respect human values, including privacy. If we get this wrong, the dangers are profound.”

Plenty of others agree. In a paper published recently by UK-based human rights and privacy groups Article 19 and Privacy International, anxiety about AI is reserved for its everyday functions rather than a cataclysmic shift like the advent of robot overlords.

“If implemented responsibly, AI can benefit society,” the authors write. “However, as is the case with most emerging technology, there is a real risk that commercial and state use has a detrimental impact on human rights. In particular, applications of these technologies frequently rely on the generation, collection, processing, and sharing of large amounts of data, both about individual and collective behavior. This data can be used to profile individuals and predict future behavior. While some of these uses, like spam filters or suggested items for online shopping, may seem benign, others can have more serious repercussions and may even pose unprecedented threats to the right to privacy and the right to freedom of expression and information (‘freedom of expression’). The use of AI can also impact the exercise of a number of other rights, including the right to an effective remedy, the right to a fair trial, and the right to freedom from discrimination.”

PREPARING FOR THE FUTURE OF AI

HELPFUL OR HOMICIDAL: THE FANTASTICAL POSSIBILITIES OF ARTIFICIAL GENERAL INTELLIGENCE

Speaking at London’s Westminster Abbey in late November of 2018, internationally renowned AI expert Stuart Russell joked (or not) about his “formal agreement with journalists that I won’t talk to them unless they agree not to put a Terminator robot in the article.” His quip revealed an obvious contempt for Hollywood representations of far-future AI, which tend toward the overwrought and apocalyptic. What Russell referred to as “human-level AI,” also known as artificial general intelligence, has long been fodder for fantasy. But the chances of its being realized anytime soon, or at all, are pretty slim. The machines almost certainly won’t rise (sorry, Dr. Russell) during the lifetime of anyone reading this story.

“There are still major breakthroughs that have to happen before we reach anything that resembles human-level AI,” Russell explained. “One example is the ability to really understand the content of language so we can translate between languages using machines… When humans do machine translation, they understand the content and then express it. And right now machines are not very good at understanding the content of language. If that goal is reached, we would have systems that could then read and understand everything the human race has ever written, and this is something that a human being can’t do… Once we have that capability, you could then query all of human knowledge and it would be able to synthesize and integrate and answer questions that no human being has ever been able to answer because they haven’t read and been able to put together and join the dots between things that have remained separate throughout history.”

That’s a mouthful. And a mind full. On the subject of which, emulating the human brain is exceedingly difficult and yet another reason for AGI’s still-hypothetical future. Longtime University of Michigan engineering and computer science professor John Laird has conducted research in the field for several decades.

“The goal has always been to try to build what we call the cognitive architecture, what we think is innate to an intelligence system,” he says of work that’s largely inspired by human psychology. “One of the things we know, for example, is the human brain is not really just a homogenous set of neurons. There’s a real structure in terms of different components, some of which are associated with knowledge about how to do things in the world.”

That’s called procedural memory. Then there’s knowledge based on general facts, a.k.a. semantic memory, as well as knowledge about previous experiences (or personal facts) that’s called episodic memory. One of the projects at Laird’s lab involves using natural language instructions to teach a robot simple games like Tic-Tac-Toe and puzzles. Those instructions typically involve a description of the goal, a rundown of legal moves and failure situations. The robot internalizes those directives and uses them to plan its actions. As ever, though, breakthroughs are slow to come — slower, anyway, than Laird and his fellow researchers would like.
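
To make the shape of those instructions concrete, here is a toy sketch (an illustrative assumption, not Laird’s actual system) in which the goal, the legal moves and the failure situations for Tic-Tac-Toe are written as three small functions that a naive agent consults before acting:

```python
# Toy encoding of a game description: a goal test, a legal-move generator and a
# failure test, plus an agent that avoids moves which hand the opponent a win.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def goal_reached(board, player):
    """Goal: three of the player's marks in a line."""
    return any(all(board[i] == player for i in line) for line in LINES)

def legal_moves(board):
    """Legal moves: any empty square."""
    return [i for i, cell in enumerate(board) if cell == " "]

def failure(board, player):
    """Failure situation: the opponent has completed a line."""
    opponent = "O" if player == "X" else "X"
    return goal_reached(board, opponent)

def choose_move(board, player):
    """Pick the first legal move that doesn't give the opponent an instant win."""
    opponent = "O" if player == "X" else "X"
    for move in legal_moves(board):
        next_board = board[:move] + player + board[move + 1:]
        replies = [next_board[:m] + opponent + next_board[m + 1:]
                   for m in legal_moves(next_board)]
        if not any(failure(reply, player) for reply in replies):
            return move
    moves = legal_moves(board)
    return moves[0] if moves else None

board = "X O  O X "          # a 3x3 board flattened into a 9-character string
print(choose_move(board, "X"))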

“Every time we make progress,” he says, “we also get a new appreciation for how hard it is.”

IS AGI REALLY AN EXISTENTIAL THREAT TO HUMANITY?

More than a few leading AI figures subscribe (some more hyperbolically than others) to a nightmare scenario that involves what’s known as the “singularity,” whereby superintelligent machines take over and permanently alter human existence through enslavement or eradication.

The late theoretical physicist Stephen Hawking famously postulated that if AI itself begins designing better AI than human programmers, the result could be “machines whose intelligence exceeds ours by more than ours exceeds that of snails.” Elon Musk believes, and has for years warned, that AGI is humanity’s biggest existential threat. Efforts to bring it about, he has said, are like “summoning the demon.” He has even expressed concern that his pal, Google co-founder and Alphabet CEO Larry Page, could accidentally shepherd something “evil” into existence despite his best intentions. Say, for example, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.” (Musk, you might know, has a flair for the dramatic.) Even IFM’s Gyongyosi, no alarmist when it comes to AI predictions, rules nothing out. At some point, he says, humans will no longer need to train systems; they’ll learn and evolve on their own.


“I don’t think the methods we use currently in these areas will lead to machines that decide to kill us,” he says. “I think that maybe five or ten years from now, I’ll have to reevaluate that statement because we’ll have different methods available and different ways to go about these things.”

While murderous machines may well remain fodder for fiction, many believe they’ll supplant humans in various ways.

Last spring, Oxford University’s Future of Humanity Institute published the results of an AI survey. Titled “When Will AI Exceed Human Performance? Evidence from AI Experts,” it contains estimates from 352 machine learning researchers about AI’s evolution in years to come. There were lots of optimists in this group. By 2026, the median respondent estimated, machines will be capable of writing school essays; by 2027 self-driving trucks will render drivers unnecessary; by 2031 AI will outperform humans in the retail sector; by 2049 AI could be the next Stephen King and by 2053 the next Charlie Teo. The slightly jarring capper: by 2137, all human jobs will be automated. But what of humans themselves? Sipping umbrella drinks served by droids, no doubt.

Diego Klabjan, a professor at Northwestern University and founding director of the school’s Master of Science in Analytics program, counts himself an AGI skeptic.

“Currently, computers can handle a little more than 10,000 words,” he explains. “So, a few million neurons. But human brains have billions of neurons that are connected in a very intriguing and complex way, and the current state-of-the-art [technology] is just straightforward connections following very easy patterns. So going from a few million neurons to billions of neurons with current hardware and software technologies — I don’t see that happening.”

WAR ROBOTS & NEFARIOUS MOTIVES: HOW HUMANS MIGHT USE AGI IS THE REAL THREAT

Klabjan also puts little stock in extreme scenarios — the type involving, say, murderous cyborgs that turn the earth into a smoldering hellscape. He’s much more concerned with machines — war robots, for instance — being fed faulty “incentives” by nefarious humans. As MIT physics professor and leading AI researcher Max Tegmark put it in a 2018 TED Talk, “The real threat from AI isn’t malice, like in silly Hollywood movies, but competence — AI accomplishing goals that just aren’t aligned with ours.” That’s Laird’s take, too.


“I definitely don’t see the scenario where something wakes up and decides it wants to take over the world,” he says. “I think that’s science fiction and not the way it’s going to play out.”

What Laird worries most about isn’t evil AI, per se, but “evil humans using AI as a sort of false force multiplier” for things like bank robbery and credit card fraud, among many other crimes. And so, while he’s often frustrated with the pace of progress, AI’s slow burn may actually be a blessing.

“Time to understand what we’re creating and how we’re going to incorporate it into society,” Laird says, “might be exactly what we need.”

But no one knows for sure.

“There are several major breakthroughs that have to occur, and those could come very quickly,” Russell said during his Westminster talk. Referencing the rapid transformational effect of atom splitting, first achieved by physicist Ernest Rutherford in 1917, he added, “It’s very, very hard to predict when these conceptual breakthroughs are going to happen.”

But whenever they do, if they do, he emphasized the importance of preparation. That means starting or continuing discussions about the ethical use of AGI and whether it should be regulated. That means working to eliminate data bias, which has a corrupting effect on algorithms and is currently a fat fly in the AI ointment. That means working to invent and augment security measures capable of keeping the technology in check. And it means having the humility to realize that just because we can doesn’t mean we should.

“Our situation with technology is complicated, but the big picture is rather simple,” Tegmark said during his TED Talk. “Most AGI researchers expect AGI within decades, and if we just bumble into this unprepared, it will probably be the biggest mistake in human history. It could enable brutal global dictatorship with unprecedented inequality, surveillance, suffering and maybe even human extinction. But if we steer carefully, we could end up in a fantastic future where everybody’s better off—the poor are richer, the rich are richer, everybody’s healthy and free to live out their dreams.” Via Built In
