Dr. Leif Nelson, Executive Director, Learning Technology Solutions, Boise State University
Artificial neural networks, the computational models behind modern large language models (LLMs) such as ChatGPT, are not new. They were first used in the 1950s to reduce echo and other background noise in telephone calls. Around the same time, a group of academics coined the term “artificial intelligence” to describe the self-learning, pattern-recognition algorithms they were working on (and acknowledged their inspiration from neurobiological structures). Nearly 70 years later, in the current “summer” of artificial intelligence, there is a lot of noise as LLMs dominate the headlines, capture public attention, and see widespread use. Today’s artificial intelligence is upending assumptions about the value and nature of work, education, and human creativity. Specifically, ChatGPT and its ilk (Bing, Bard, etc.) have raised concerns about writing practices and student cheating, hallucinations (or “fabrications”) and fake news, professional effectiveness and industry disruption, and usefulness and harm. Countless predictions of how this technology will indelibly change the future of humanity range from exaggerated optimism to cataclysmic pessimism.
Although artificial intelligence is at the center of so much hype and hysteria, it is not, in itself, the problem. Rather, many of the problems with current-generation generative AI are by-products of the broader social conditions in which these tools exist. From an economic perspective, the attention and social media industries have extracted, exploited, and regurgitated all possible human content and data. Likewise, current generative AI tools are “trained” on that existing content, and these models face new challenges as “black box” architectures muddy questions of ownership and attribution. From an environmental perspective, data may indeed be the “new oil”: the cloud computing industry has recently surpassed the aviation industry in carbon emissions, and the computing power required to run LLMs has undoubtedly exacerbated the problem. In education, the shift to online and digital environments has brought an increased focus on performance and completion metrics, creating an atmosphere in which students may be inclined, or even encouraged, to use tools that enhance the speed and quality of their output, even though doing so may shortcut some of the more difficult, tedious work of “good” learning.
Some of the world’s largest companies were quick to jump on the AI bandwagon, releasing these tools with unprecedented speed and recklessness. From its almost anarchic birth in the 1990s to the democratizing “Web 2.0” of the mid-2000s, the web was once seen as a virtual, global “public square”; it has since become a marketplace of influencers and followers and their preferences and opinions, all driven by algorithmic ad sales on a handful of giant platforms such as Google and Facebook. The (unsurprising) lesson of this evolution is that those with the most capital will do whatever it takes to scale and remain dominant, downplaying and sidestepping ethical violations along the way. In congressional hearings and lawsuits, the leaders of these big tech companies have taken a coy stance, insisting that their products are “just platforms” and that they are not responsible for their use and abuse (even when they are). Almost as if anticipating a future need for plausible deniability, contemporary AI leaders such as Sam Altman have warned of the dangers of their own products.
Many claim that the age of artificial intelligence is inevitable and that the best (if not the only) course of action is to learn how to use these tools well and responsibly. This view may be fatalistic, but it is also realistic and practical. The hundreds of millions of people using a particular tool or platform may be a point of pride for some tech executives, but that figure is also a powerful reminder that each of us bears responsibility for our personal habits and practices when using these tools. Here are five recommendations for the use of artificial intelligence that consider the relationship between individual actions and wider social impacts.
1. Use artificial intelligence judiciously. Doing so saves energy, protects sensitive data, and helps ensure that the interactions these systems receive are of good quality.
2. Artificial intelligence is a tool, and some tasks call for a particular tool while others do not; reliance on a tool can become a crutch. Terms like “handmade” or “made from scratch” connote quality, care, and wholesomeness. Perhaps writing “from scratch” (i.e., without AI assistance) will take on a similar meaning as varying degrees of “assisted” writing become the norm. Spend more time writing by hand. Literally: use pen or pencil and paper. The physical act of handwriting will feel different, even more real, than typing in a word processing application.
3. Teachers and researchers should move away from the false dichotomy of having to “embrace or ignore” AI and instead consider how AI may impact their fields and disciplines in the coming years. They should engage in discussions and reflective practice on the topic with peers and students.
4. The risk of AI tools spreading misinformation, bias, and forms of social manipulation (whether intentional or unintentional) is high. Critical thinking and information literacy skills will be more important than ever.
5. The real promise of technology and automation is to reduce the need for humans to perform routine, monotonous tasks. Use artificial intelligence to complete these types of tasks, and spend the time saved on things that foster relationships or creativity.
At a macro level, governments and other institutions have the responsibility to develop overall policies and guidelines for the AI industry. These efforts take time and involve a degree of uncertainty as the technology evolves. In the meantime, everyone should educate themselves about artificial intelligence and its potential promises and shortcomings. At its most basic, progress is the accumulation of individual decisions and actions. When those decisions and actions are made with full knowledge of the situation and consideration of potential consequences, everyone benefits.