Is AI Helping Us or Quietly Hurting Us?
Artificial Intelligence has become a crucial part of our daily lives. We ask Alexa to play music, use AI-based apps for therapy, and rely on algorithms for news. We trust AI with our secrets, emotions, and routines. But while we embrace this technology, we often overlook the warning signs: privacy violations, misinformation, bias, and manipulation.
It’s time to ask: are we genuinely moving forward with AI, or are we stepping into a more dangerous version of the past?
Betrayal of Trust and Privacy
Several real-world examples show how AI systems have broken public trust. Cambridge Analytica analyzed personal data from over 87 million Facebook users without consent to influence elections. Clearview AI collected more than 3 billion photos from platforms like Instagram, Facebook, and LinkedIn and sold facial recognition tools to private companies and law enforcement without user permission.
Pegasus spyware was used by governments to monitor journalists and human rights defenders through their phones. Alexa, a household voice assistant, was found to be recording conversations even when it had not been deliberately activated. One of the most disturbing examples is BetterHelp, a mental health app. Users shared their deepest fears, from trauma to suicidal thoughts, yet their personal data was shared with platforms like Facebook and Snapchat for advertising purposes. The very tools designed to help us were exploiting our trust.
AI and the Spread of Misinformation
AI is also being used to create and spread false information. Just two days before Slovakia’s 2023 election, a fake audio clip of the president circulated online. Experts confirmed that the recording had been manipulated using AI. According to the Freedom House 2024 Report, over 20 countries experienced election interference through AI-generated deepfakes between 2023 and 2024.
These tools blur the line between real and fake, confusing the public, influencing votes, and destabilizing democracies. In the wrong hands, AI can easily become a weapon against the truth.
The Bias Beneath the Code
AI systems don’t just make decisions; they pass judgment. When trained on biased data, their judgments reflect the discrimination already present in society.
The COMPAS tool, used in U.S. courts to predict whether a defendant is likely to reoffend, was found to be biased against Black defendants. In the Apple Card scandal of 2019, women received much lower credit limits than men, despite having similar financial profiles.
AI-based exam proctoring tools like Proctorio and Honorlock flagged Black and Brown students more frequently because facial recognition systems struggled to accurately detect their faces. These examples show that AI isn’t neutral; it mirrors and often worsens existing inequalities.
Conclusion: The Past in Disguise?
What’s most alarming is how AI is bringing us back to a time when privacy was optional, discrimination was common, and truth could be easily manipulated. But now, it’s happening under the guise of innovation.
If we don’t demand transparency, accountability, and strong ethical oversight, AI will continue to evolve without consequences. This could shape a future where our rights, dignity, and trust become the true cost of progress.
What are your views on artificial intelligence? Stay tuned: my next blog will explore how we can use this intelligence for the betterment of humanity and bring these issues under control.
Don’t forget to like and share my blog.
– Khevna Sharma