Narain Batra: Can artificial intelligence predict another Jan. 6?


This commentary was written by Narain Batra of Hartford, who is the author of several books, including “The First Freedoms and America’s Culture of Innovation,” and “India In A New Key: Nehru To Modi.” 

Artificial intelligence is opening a wonderful world of immense possibilities. Such prospects include, for example, helping save the Amazon by forecasting deforestation; automation and job creation through reskilling; mitigating and managing climate change by measuring emissions; boosting the discovery of new drugs; fighting terrorism and transforming national security; and improving the criminal justice system and cutting crime rates.

AI-based autonomous vehicles — cars, trucks, buses and drone delivery systems — are already affecting our lives. By using AI, metropolitan areas could be transformed into smart cities for service delivery, environmental planning, power utilization, handling emergencies and much more.

These are some of the known and knowable problems that AI algorithms can solve with greater efficiency. But can AI foresee the unthinkable — what former defense secretary Donald Rumsfeld called the "unknown unknowns"?

Earlier this year, The Washington Post reported that after the horrendous attack on the Capitol on Jan. 6, data scientists working on CoupCast, an AI program at the University of Central Florida, began to focus on "unrest prediction." They are confident that artificial intelligence algorithms could be applied to predict political violence in America.

So far, the report said, CoupCast has focused on electoral violence and coups in the developing world. The United States, with its long democratic traditions, seemed far removed from such threats, but the Jan. 6 assault called that sense of American exceptionalism into question. American democracy seems fragile.

The CoupCast experts believe that by “designing an AI model that can quantify variables — a country’s democratic history, democratic ‘backsliding,’ economic swings, ‘social-trust’ levels, transportation disruptions, weather volatility and others — the art of predicting political violence can be more scientific than ever.”

Machine-learning AI models can process massive amounts of social, political and economic data and issue forewarnings about emerging political threats, the data scientists said. The building blocks of political violence are now well enough known for a populist leader to use them to arouse a mob.
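To make the idea of "quantifying variables" concrete, here is a minimal sketch of how indicators like democratic history, backsliding and social trust could be folded into a single risk score. CoupCast's actual model is not public; the feature names, weights and logistic form below are invented purely for illustration.

```python
# Illustrative only: a toy logistic model that turns normalized
# country-level indicators (0 to 1) into an unrest-risk score.
# The weights are invented assumptions, not CoupCast's real values.
import math

WEIGHTS = {
    "democratic_history": -1.5,  # longer democratic history lowers risk
    "backsliding": 2.0,          # democratic backsliding raises risk
    "economic_shock": 1.2,       # economic swings raise risk
    "social_trust": -1.0,        # higher social trust lowers risk
}
BIAS = -0.5

def unrest_risk(features):
    """Return a risk score between 0 and 1 for one country profile."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing to (0, 1)

stable = {"democratic_history": 0.9, "backsliding": 0.1,
          "economic_shock": 0.2, "social_trust": 0.8}
fragile = {"democratic_history": 0.3, "backsliding": 0.8,
           "economic_shock": 0.7, "social_trust": 0.2}

# The fragile profile scores higher than the stable one.
print(unrest_risk(stable), unrest_risk(fragile))
```

A real system would learn such weights from historical data rather than hand-setting them, but the shape of the computation — indicators in, probability out — is the same.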

Besides CoupCast, there are several other groups that have been using AI and mixed-method approaches to study and forecast crises around the world; and now they are focusing their attention domestically. As the Post reported, the Pentagon, CIA and State Department have also been moving in this direction using AI to predict geopolitical risks, especially with China. For example, the Global Information Dominance Experiment uses AI “trained on past global conflicts” to predict where new ones might happen.

AI has the potential for not only early warnings (coming events cast their shadows before) but more importantly for early awareness of events that have no past history, the unknown unknowns, the seemingly unknowable. 

Technological innovations mutate and creep into other areas. A new world of sensate surroundings is emerging, in which nothing would remain incommunicado.

Based on converging sensor and intelligent technologies, law enforcement and anti-terrorism experts are dealing with terrorism, among other problems, in altogether different and perhaps more effective ways. The interiors of future airplanes would be embedded with sensors that record and transmit any unusual activity to a monitoring and control center for pre-emptive action.

Scientists at QinetiQ, a commercial offshoot of the UK's Ministry of Defence, have developed a working model of a sensor-embedded airplane seat that is capable of capturing signals of physiological changes in a passenger and transmitting the information to a cockpit monitor. The signals could enable the crew to analyze whether the person is a terrorist or someone suffering from deep vein thrombosis, for example.

The smart seat would eventually be able to register signs of any emotional stress a passenger feels during the flight. Hidden seat sensors would provide unobtrusive in-flight surveillance and could yield actionable intelligence about the activities, including the health status, of in-flight passengers. More importantly, the information would enable plainclothes air marshals to take preventive action if terrorists were contemplating blowing up or hijacking the plane. The cockpit would become an anti-terror cell.

Technologies seldom stand alone in this age of digital networking. They have a recombinant potential and tend to converge and splice with others to form newer technologies, which could be used in ways the original inventors never imagined. For example, if you combine QinetiQ’s smart seat technology with “sympathetic haptics” technology developed a few years ago at the Virtual Reality Laboratory at the University at Buffalo, New York, you could see how feelings of stress could be precisely transmitted and assessed.

If a bomber fidgets or a person is having a heart attack, the physical movements that accompany the stress and distress would be transmitted to the cockpit monitor and also to the homeland security monitors. The two convergent technologies would turn an airplane seat into a virtual-reality surveillance system that would silently record every physical motion of the occupant for instant analysis.

The radicalization of American politics that led to the Jan. 6 Capitol assault was driven by many complex socio-political factors. But it was aided by online mobilization tactics — tweets, memes and viral content — used to spread disinformation and promote extremist ideology. The challenge is whether artificial intelligence can fight disinformation and conspiracy theories before they lead to real-life catastrophic actions.
