By Rutba Riyaz
Digital life is augmenting human capacities and disrupting eons-old human activities. Code-driven systems have spread to more than half of the world’s inhabitants in ambient information and connectivity, offering previously unimagined opportunities and unprecedented threats. As emerging algorithm-driven artificial intelligence (AI) continues to spread, will people be better off than they are today?
Some 979 technology pioneers, innovators, developers, business and policy leaders, researchers and activists answered this question in a canvassing of experts conducted in the summer of 2018.
The experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities. They spoke of wide-ranging possibilities: that computers might match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation. They said “smart” systems in communities, in vehicles, in buildings and utilities, on farms and in business processes will save time, money and lives and offer opportunities for individuals to enjoy a more-customized future.
Many focused their optimistic remarks on health care and the many possible applications of AI in diagnosing and treating patients or helping senior citizens live fuller and healthier lives. They were also enthusiastic about AI’s role in contributing to broad public-health programs built around massive amounts of data that may be captured in the coming years about everything from personal genomes to nutrition. Additionally, a number of these experts predicted that AI would abet long-anticipated changes in formal and informal education systems.
One risk experts frequently cite is that a system using AI could be deliberately programmed to do something devastating. The clearest example is “autonomous weapons” — systems programmed to kill humans in war. Although many countries and organizations have called for autonomous weapons to be banned, there are other ways AI could be programmed to harm people, and experts worry that as AI evolves it may increasingly be used for nefarious purposes.
Another concern, related to the last, is that an AI given a beneficial goal may develop destructive behaviors as it attempts to accomplish that goal. Consider, for example, a system tasked with helping to rebuild an endangered marine creature’s ecosystem: in doing so, it might decide that other parts of the ecosystem are unimportant and destroy their habitats, and it could view human intervention to fix or prevent this as a threat to its goal.
Not that many years ago, the idea of superhuman AI seemed fanciful. But with recent developments in the field of AI, researchers now believe it may happen within the next few decades, though they don’t know exactly when. With these rapid advancements, it becomes even more important that the safety and regulation of AI be researched and discussed at the national and international levels.
In 2015, many leading technology experts (including Stephen Hawking, Elon Musk, and Steve Wozniak) signed an open letter on AI that called for research on the societal impacts of the technology. The concerns raised in the letter include the ethics of autonomous weapons being used in war and safety concerns around autonomous vehicles. In the longer term, the letter posits that unless care is taken, humans could lose control of AI systems and their goals and methods.
AI safety matters because it aims to keep humans from harm and to ensure that proper regulations are in place so that AI systems act as intended. These issues may not seem immediate, but addressing them now can prevent much worse outcomes in the future.
Making sure that AI is fully aligned with human goals is surprisingly difficult and requires careful design. An AI with ambiguous or overly ambitious goals is worrisome, because we cannot know what path it might take to reach the goal it has been given.
Yet, most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the accompanying table.
- The author is an intern at Kashmir Observer