Artificial intelligence once seemed a science fiction concept, hardly grasped by anyone outside the once-niche field of computer science. Many forms of media explored the idea of an A.I., and for decades, computer scientists and tech enthusiasts have been obsessed with the possibility.
Stanley Kubrick’s science fiction opus “2001: A Space Odyssey” (1968) is a cautionary tale about giving A.I. full control. Its antagonist, a rogue computer named HAL, abuses the system that gives it power over the lives of human astronauts, going so far as to commit murder when it loses its understanding of what it means to protect humans.
James Cameron’s “The Terminator” (1984) both shocked and entertained the world with a surreal vision of a world taken over by machines. Two decades later, “I, Robot” (2004), starring Will Smith, explored the concept of A.I. implemented in humanoid robots that could move and think freely.
“Fallout,” a popular video game franchise, is known for its speculative takes on A.I. and its questions about personal identity and self-actualization.
When these stories were created, they had only a creative idea — a vision — of what A.I. could turn out to be.
But the one thing that remained constant throughout all of these stories was the message: A.I. is dangerous.
Now, in 2026, those visions have stepped off the movie screen and into the real world.
The world can finally touch, see, use, and interact with what was once only a vague idea. More than that, the world can now understand it: its uses and its uselessness alike. It is only a matter of time before A.I. reshapes our world drastically from how we’ve known it for centuries.
A Short History of Artificial Intelligence:
The term artificial intelligence refers to technology that mimics the human mind and its ability to learn. Human intelligence is remarkable; we are known as the smartest creatures on Earth, with an innate ability to learn and apply knowledge relatively quickly.
For years, scientists have tried to recreate our minds artificially with the goal of advancing our knowledge through this technology.
The pioneering computer scientist Alan Turing published a paper in October 1950, “Computing Machinery and Intelligence,” which delved into the possibility of computers that could learn automatically and possess their own intelligence. In it, he proposed what we now know as the “Turing Test,” which measures a machine’s ability to exhibit intelligent behavior indistinguishable from a human’s.
The term Artificial Intelligence (A.I.) was officially coined at a 1956 workshop at Dartmouth College, organized by John McCarthy. The participants explored the general idea of A.I. through probability and logical reasoning, and the projects they organized are credited with laying the foundation for the field.
From the 1950s to the 1970s, studies and projects on A.I. became commonplace, though most computer scientists worked on relatively simple problems under its umbrella. Only recently have more advanced forms of A.I. been developed, in an image that aligns with science fiction and with what past scientists envisioned A.I. would be. These A.I. resources are now open to the public and have applications for almost everything we use today.
How A.I. is woven into modern society:
Since the dawn of humanity, we have utilized tools to make our lives easier. It’s a characteristic that’s allowed us to evolve and surpass the other creatures of this planet.
When we needed to hunt animals, we found the rock.
When we needed to travel far, we found the wheel.
When we needed to communicate, we found the telephone.
This is who we are; this is who we’ve always been.
But what happens when the next rock we find is capable of destroying us?
The concept of artificial intelligence speaks volumes about natural human curiosity. Throughout history, we have been known to explore realms of science and technology that are fascinating, often dangerous, and a complete mystery.
In the critically acclaimed film “Jurassic Park” (1993), Dr. Ian Malcolm warned against rushing too quickly toward technological advancement: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
The scientists in Jurassic Park succeeded: dinosaurs were brought back to life. But the impact was a net negative. They had failed to grasp the dangers of their project, and after the dinosaurs broke loose, many innocent lives were lost. There was no going back, because they had been too preoccupied with doing something scientifically fascinating to weigh the pros and cons. Much like them, the creators of A.I. today, no doubt intelligent, have seemingly neglected to think about the risks. And what’s done is indeed done.
A.I. has now worked its way into every corner of human life: monitoring systems, cameras, search engines, editing software, content creation, education, coding, and more. These are only a few examples, and countless online tools have added their own A.I. assistants, eroding the authenticity of those sites. It feels like I can no longer use an online tool without simultaneously using and feeding A.I., without knowing it or wanting to.
Modern A.I. systems:
OpenAI was founded in December 2015 as a non-profit by a group including Sam Altman, Elon Musk (then CEO of Tesla), Ilya Sutskever, and Greg Brockman.
In 2019, OpenAI restructured, placing a for-profit arm under the original non-profit.
The structure works like this: the non-profit parent fully controls OpenAI Global, LLC, the for-profit entity that manages shareholders, employees, and profit (with a cap on the returns investors can earn).
OpenAI went on to create ChatGPT, released in late 2022 and now among the most popular tools in the world, along with other technologies for generating artwork, conducting research, and more.
ChatGPT has already started to transform the world. Students across the world access it, for better or worse, to either help with their assignments or have it do it for them entirely. Professionals have started using it in their work. Artists are already fighting a war to keep art human.
And even more recently, other A.I. tools have come forward, like xAI’s Grok, Meta AI (built into Instagram), Google’s Gemini, and OpenAI’s video generator Sora.
The spread of A.I. is akin to a wildfire; it has spread across multiple aspects of human life, the flames fanned by the corporations, enthusiasts, millionaires, and shareholders that support it. The world is seeing A.I. in many areas of work, education, online spaces, media, art, and more. A reality is forming where this technology poses a very real threat to various ways of life. We’ve come to a point at which A.I. could be — and is — taking jobs. Programming is a field where, not even 15 years ago, people thought they had job security. Now, you’d be hard-pressed to find a single coding job where A.I. doesn’t play a major role.
A.I. has started taking entry-level jobs in the coding field. According to a study from SignalFire, a venture capital firm that tracks hiring data across the tech industry, the number of entry-level positions being filled has declined while the number of higher-level jobs has held steady. This is the result of both A.I.’s takeover of coding and tech work and a fluctuating job market for programmers (Mashable).
Thankfully, we aren’t yet at a point where A.I. is taking over jobs at lightning speed. The threat is still real, nonetheless, and once you understand how A.I. works now, it’s far too easy to see how it could work in other jobs.
How available A.I. models work:
Computers can’t do anything beyond the limits we give them. Current A.I. systems are actually large language models (LLMs). They take pre-existing sources and data produced by real humans and generate results from them, essentially scanning the online space.
This results in a system that doesn’t actually further human knowledge. As of right now, the publicly available LLMs don’t do more than we can already do. We’ve been researching on our own for decades, we still do, and ChatGPT does the same thing, just less accurately.
This is a major flaw in current models. Everything that humans have already done, invented, written, researched, and created that is somewhere online is being constantly absorbed by A.I.
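To make that idea concrete, here is a minimal sketch: a toy bigram model, far simpler than the neural networks behind real LLMs, but it illustrates the same principle of learning statistical patterns from existing human-written text and recombining them. The tiny corpus here is invented for illustration.

```python
import random
from collections import defaultdict

# A toy "language model": learn which word tends to follow which,
# purely from existing text, then replay those patterns.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count the observed continuations of each word.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8, seed=0):
    """Generate text by repeatedly sampling an observed next word."""
    random.seed(seed)  # fixed seed so the sketch is reproducible
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:  # no continuation was ever observed
            break
        words.append(random.choice(options))
    return " ".join(words)

# Every word pair in the output already appeared in the corpus:
# the model can only regurgitate, never invent.
print(generate("the"))
```

The output reads like plausible text, yet every transition in it was copied from the training data, which is the essence of the “absorb and recombine” behavior described above, just scaled down by many orders of magnitude.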
This ties into the next issue: Generative A.I.
A.I. is used to generate art, videos, text, stories, and more. This is a controversy waiting to erupt. Generative A.I. has already been used to make ultra-realistic images and videos, and it has evolved quickly. In prior years, it was relatively easy to spot A.I.-generated content; now, it’s more challenging than ever. At a simple glance, it can look so real, so genuine. And the sky is the limit for the capabilities of video and image A.I.
Right now, one could use these tools to generate videos of anyone and anything if there is enough information.
These models don’t actually think; they just regurgitate. They are not truly sentient with thoughts of their own; they simply learn incredibly fast.
We have established that LLMs, and A.I. generally, have a wide array of applications and uses. The risks of allowing them to flow freely through so many facets of our lives have yet to be fully explored.
Learning requires some struggle: when you’re cramming for tests and studying, you are struggling to find and remember the answers. People have become victims of instant gratification and are uncomfortable with not knowing. A.I. is devaluing the journey of finding out, of what it takes to know. As a modern society, it’s fair to say we have lost the art of the academic, and more and more, people are losing the ability to be content with not yet knowing something or not yet having accomplished it.
A.I. poses a massive threat to younger generations of students and learners in general. Generation Alpha and those after it will ultimately grow up without having to struggle or test their minds to learn something new. And this is entirely different from the Internet or the World Wide Web: A.I. will eliminate the researching process, the writing process, and the creative process. It will be a day of reckoning for the world when the minds of billions of people become less acute, less creative, less imaginative, and less capable of solving problems. People in power already want the masses to be uneducated, and the unique minds of the people are in jeopardy. This is not about convenience; it is about the destruction of thinking.