AI Is Everywhere

But is it right for your business, lab or classroom?

Between the time most people wake up and go to bed, they’ll use artificial intelligence half a dozen times, often without even knowing it. Depending on your perspective, it probably makes life either easier in some small way or more complicated and cluttered. Whether it’s facial recognition to unlock a smartphone, a relentless stream of curated content on social feeds, a digital voice assistant for a quick weather update or a navigation app to get to a lunch date, AI permeates all aspects of life.

According to a recent survey by software maker Pegasystems, 84 percent of the company’s customers use at least one AI-powered device or service, but only a third of them are aware that they’re using AI. In other words, AI is embedded in day-to-day life whether we realize it or not.

However, when OpenAI unleashed ChatGPT in November 2022, it felt like the dawn of a new era. Discussions and debates lit up around the dinner table about AI and machine learning—algorithms that teach computers to think like humans. Everybody, it seemed, was suddenly talking about this chatbot and its uncanny ability to quickly generate a poem customized for a friend’s birthday, churn out a 1,000-word paper on electric vehicles for a Grade 10 student or craft a legal argument in a fraction of the time it would take an articling lawyer.

Or, as the team at Victoria’s Whistle Buoy Brewery demonstrated last year, you can lean on ChatGPT to craft the recipe for a bestselling hazy pale ale, which they dubbed Robo Beer, then write the marketing copy and help design the artwork for the brew.

Two months after its launch, ChatGPT had become the fastest-growing consumer software app in history with more than 100 million users. Its meteoric rise fuelled OpenAI’s current valuation of US $80 billion. Similarly, the stock value of semiconductor maker Nvidia Corporation surged past US $1 trillion last year, driven by the burgeoning AI market.

Yet many questions remain unanswered about AI’s impact. Will it relieve workers of boring, rote tasks, freeing them for more creative endeavours, or simply put people out of work? Is AI reliable? Failures of Tesla’s Autopilot technology have been linked to nearly 500 accidents, including 13 fatalities; they forced Elon Musk’s automaker to recall two million vehicles last year and showed that AI’s application to autonomous driving still needs refinement.

Can we trust it? AI can rapidly scour the internet for vast amounts of data, some of it private and gathered without consent, allowing an individual’s movements and behaviours to be tracked like never before. Not only do AI’s surveillance capabilities pose an Orwellian threat to privacy, but its deployment in the defence sector is also extremely troubling from an ethical standpoint.

It’s why University of Montreal computer scientist Yoshua Bengio, world-renowned for pioneering work on artificial neural networks and deep learning, spends as much time championing AI as he does raising red flags. Named recently to Time magazine’s 2024 list of the 100 most influential people, Bengio notably led discussions about AI’s potential for negative social impact while helping to shape the 2017 Montreal Declaration on Responsible AI. The voluntary declaration’s noble aim is “to put AI development at the service of the well-being of all people.”

Against the backdrop of this rapidly developing AI world, Douglas sat down with some Victoria thinkers, innovators, and leaders to hear how they have embraced AI and worked it into their workflows.

AI in the Boardroom

What’s in a word? Potentially a lot. But can you tell the difference between words crafted by a human and those generated by AI? If you’ve noticed lately that superlative-laden real estate listing descriptions are making all Realtors sound like poets, you’re not alone.

Deirdre Campbell, president of tartanbond, works in the world of communications, marketing, and public relations, largely for clients in the tourism sector. Authenticity, tone, and voice are everything. As powerful as AI is, it makes mistakes, and that can be disastrous for a media campaign. Artificial intelligence is only as good as the digital data it mines; if that data is biased, inaccurate, or lacking in diversity, AI will churn out dubious results and predictions.

Campbell and her team are embracing AI as another tool, albeit with caveats. In meetings, for example, AI can act as the note-taker and help identify “next steps or ‘to dos,’” freeing everyone to focus on the discussion, says Campbell.

Using artificial intelligence, it took the Douglas team only a few minutes to create a robo-beer-making image to accompany this story.

“It also has a lot of value for research. AI can pull out information from a large chunk of material that we have on file, and that can save a lot of time and money,” she says. “But you have to be aware of its limitations. From a tourism perspective, the biggest concern is AI-generated photography versus the real thing. A picture is worth a thousand words, unless that picture is AI-generated.”

In 2023, a team of University of California, Berkeley, and Stanford University researchers conducted a study that poked holes in a fundamental assumption about AI: that machines can learn like humans and get better over time. According to their results, ChatGPT actually got dumber and less accurate over time when asked to do certain tasks. For example, the chatbot’s accuracy rate on a certain math problem dropped more than 30 percent over a three-month period.

“At the end of the day, people want to do business with people,” Campbell says. “If someone feels like they’re being talked to by a machine, that’s a big ‘no.’”

AI in the Lab

Scientists and tech workers are leveraging AI to find solutions, make discoveries, and explore worlds that not long ago were simply out of reach. AI’s ability to quickly scan, analyze, and draw inferences from huge data sets is opening up exciting new avenues of applied science.

Douglas magazine’s 10 To Watch winner Ocean AID is harnessing AI and machine learning to help tackle ocean pollution. Andrew Polanyi, a University of Victoria computer science graduate, has a longstanding interest in ocean health. Two years ago, his Ocean AID co-founder Archit Kumar suggested applying AI to the environmental challenge of ocean plastic. After some business strategizing, they shifted their focus to ghost fishing gear, the lost and abandoned gear that drifts through the ocean, entangling and endangering marine mammals.

Current methods use sonar to locate, track, and retrieve underwater objects like fishing nets. The process is slow and expensive, requiring a human either to monitor the system in real time or to analyze the data afterward. Either way, it’s tedious and time-consuming.

By marrying AI with sonar technology, Ocean AID could be a game-changer. “AI can automatically detect objects based on machine learning from thousands of known data points,” Polanyi says. “We want to provide real-time, onboard detection that’s faster and cheaper.”
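
To make the idea concrete, here is a minimal sketch of that kind of detection pipeline: a classifier trained on labeled sonar returns that then scores each new ping onboard. The features, labels, and thresholds are invented for illustration; this is not Ocean AID’s actual system.

```python
# Minimal sketch of ML-based sonar detection (illustrative only,
# not Ocean AID's system). A classifier is trained on thousands of
# labeled sonar returns, then scores new pings in real time.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in for known data points: each row is a feature vector
# extracted from a sonar return (e.g., echo-intensity statistics),
# labeled 1 for known fishing gear, 0 for seabed clutter.
X_train = rng.normal(size=(2000, 16))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)  # toy labels

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Onboard, each incoming ping is scored automatically; confident
# detections are flagged for retrieval crews instead of waiting
# for a human to review hours of sonar data.
new_ping = rng.normal(size=(1, 16))
gear_probability = model.predict_proba(new_ping)[0, 1]
if gear_probability > 0.8:
    print(f"Possible ghost gear detected (p={gear_probability:.2f})")
else:
    print(f"No detection (p={gear_probability:.2f})")
```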

Polanyi and Kumar are in the process of leveraging a round of angel and family investment to grow the business. So far, the young entrepreneurs have worked with Coastal Restoration Society, the BC Shellfish Growers Association, and many of the 100 or so organizations engaged in ghost fishing gear recovery. Recently, Ocean AID completed a project with Bear River First Nation in Nova Scotia, using its AI-driven platform and data pipeline to locate and map more than 1,000 lost lobster traps.

Another UVic academic is literally making waves with AI-assisted research. Oceanographer Johannes Gemmrich recently co-published a paper with collaborators from the University of Copenhagen demonstrating how AI can help predict monster rogue waves. The researchers used AI to examine data from more than a billion waves; the algorithm then produced an equation. To their surprise, it contradicted the prevailing wisdom that rogue waves form when one wave slowly extracts energy from another to become one giant wave.

Instead, thanks to AI and machine learning, Gemmrich and his colleagues learned that rogue waves are caused by something else: linear superposition, which occurs when two or more wave systems briefly align to create a massive wave. According to the research team, they now have hard data to back up what mariners have known anecdotally for centuries, giving the shipping industry a tool for avoiding dangerous waves that put crew and cargo at risk.
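
In plain terms, linear superposition means wave heights simply add. A minimal sketch of the idea, written in the standard textbook form rather than the equation the algorithm actually produced:

```latex
% Illustrative textbook form of linear superposition (not the
% equation derived in the study): the sea-surface elevation \eta
% is a sum of independent wave components.
\eta(x, t) = \sum_{i=1}^{N} a_i \cos(k_i x - \omega_i t + \phi_i)
```

When the phases of several components briefly line up, their amplitudes stack into one abnormally tall crest, with no need for one wave to slowly steal energy from another.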

Researchers at the University of British Columbia have also developed an AI model that can analyze an oncologist’s notes and flag whether a cancer patient will need mental health support during treatment. Instead of a human poring over a doctor’s notes looking for telltale words and subtle nuance, the AI can make predictions with better than 70 percent accuracy, according to the UBC study.
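
The underlying approach is standard text classification. A minimal sketch, with toy notes and labels invented for illustration (this is not the UBC team’s actual model):

```python
# Illustrative sketch of flagging clinical notes for likely
# mental-health support needs (not the UBC model).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for de-identified oncologist notes, labeled 1 if
# the patient later needed mental-health support.
notes = [
    "patient anxious about prognosis, trouble sleeping",
    "tolerating treatment well, upbeat, good family support",
    "expresses hopelessness, withdrawn during consult",
    "no complaints, asking practical questions about schedule",
]
needed_support = [1, 0, 1, 0]

# TF-IDF turns each note into word-frequency features; the
# classifier learns which words and phrases predict a referral.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, needed_support)

new_note = ["patient reports persistent worry and poor sleep"]
print(model.predict_proba(new_note)[0, 1])  # probability support is needed
```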

“AI is blowing up everywhere. Pretty much any mundane or challenging task that can be done by AI, will be done by AI,” says Ocean AID’s Polanyi.

AI in the Classroom

Some historical perspective is helpful to fully comprehend the monumental task facing schools and educators in this brave new artificial intelligence world. In 2007, Apple launched its flagship iPhone, 13 years after IBM’s prototypical Simon Personal Communicator hit the market.

Yet three decades after the Simon’s debut, our education system is still struggling to develop policy around smartphone use in the classroom. Should there be system-wide bans, or should it be left to the discretion of individual teachers, schools, or districts?

In a media release last year, provincial Minister of Education and Child Care Rachna Singh said, “The ministry is continuing to research AI and how best to support schools and teachers as they navigate this topic.”

It’s a tall order. Chances are today’s students, who have grown up in the age of smartphones, are already more fluent in ChatGPT than many of their teachers ever will be.

“The idea of blocking out AI is not an option,” says Michael Paskevicius, assistant professor in educational technology at UVic. “I don’t ban AI from my classrooms. It’s a tool that I encourage my students to explore, understand and use.”

Paskevicius sees time-saving potential for teachers who leverage AI to develop lesson plans or tailor lessons to individual student needs. Software companies are flooding the market with AI-driven platforms aimed at students, teachers, parents, and schools. For example, Classcraft, according to its maker, is a game-based classroom management tool that uses AI to recognize patterns in student learning while providing real-time suggestions to teachers. Happy Numbers provides individualized math education while giving teachers insights into student growth and learning. Just how many teachers will have the bandwidth to adopt and scale these AI tools in the classroom is an open question.

There has been plenty of hand-wringing about kids using AI to “cheat” on papers and projects, but Paskevicius believes existing policy around academic integrity suffices. “It’s not to say that we won’t have to revisit policy, but the same rules apply. Do your own work and check your sources, even if it is AI,” he says. “AI doesn’t provide citations; it just gives you a block of text.”

Data privacy is always a concern, but especially in a school setting involving minors. Consider that any information a child enters into an AI-driven educational tool may become part of the data used to train that tool’s underlying model. Is that information secure, and who’s watching? Those are big questions.

As is often the case, regulations struggle to keep pace with the increasing sophistication of AI. In April 2024, a group of American newspapers, including the Chicago Tribune and New York Daily News, filed a lawsuit against Microsoft and OpenAI for using their articles to train AI systems. It likely won’t be the last time a charge of copyright infringement is levelled against AI tech.

Adam Dubé, associate professor in McGill University’s Department of Educational and Counselling Psychology, said educators and administrators are being “bombarded with endless advice on how to make our classrooms and institutions GenAI ready.”

“Advice ranges from the Pollyannaish rapid adoption of the latest GenAI tools to scaremongering, ineffective bans of all things algorithmic. The future of GenAI in education is neither so promising nor so bleak,” Dubé said in an interview for McGill.ca (“Experts: The impact of AI on education”).

According to Dubé, what’s needed is a “middle path” approach, one that recognizes both the benefits and risks of AI. Just what that middle path will look like is anybody’s guess.

Additional Info

Media Contact: Andrew Findlay

Source: Douglas Magazine
