
The Consequences of AI in Everyday Life from a Sociological Lens

Hi friends!


Are you ready for one of those deeper sociological topics? I think they tend to do quite well on the blog and I think they scratch that sociological research itch I'll always have. AND I'll probably bring this up again, but AI is something I want to learn more about before I'm 30. I'm sure I'll get through the "30 things I want to do before I'm 30" eventually. We did just restock our alcohol stand thing so the cocktail is next on the list (I've actually made a Rotten Pumpkin and Sangria since starting this article). I think I've ticked quite a bit off the list so far so that makes me feel a bit better about it.


This blog discusses the consequences of AI (Artificial Intelligence) in everyday life from a sociological lens (or perspective).



First, what is AI?


AI (Artificial Intelligence), according to Wikipedia, "in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals". Did that make your brain hurt? Yeah, me too.


Let's break it down (for us simple-minded folks - especially me).


IBM describes AI as "technology that enables computers and machines to simulate human intelligence and problem-solving capabilities." So to me, that means AI is a type of computer software that is kind of replicating what we do as humans so we don't have to think as much. I keep seeing lots of memes about people wanting AI to do their mundane tasks (like the dishes and laundry) so they can write and make art, not have the AI make their art and do their writing so they're left with the mundane tasks - and honestly, I vibe with that.


Some examples of AI (taken directly from Google) include:


  • Chatbots - a software application that mimics human conversations through voice/text interactions. One example of a chatbot is ChatGPT.

  • Digital assistants - advanced versions of chatbots (think Siri, OK Google, etc.) where you speak commands to them and they can do things like schedule a flight or set an alarm.

  • Navigation - navigation apps such as Google Maps and Apple Maps use AI to direct us to where we're going. I never really thought about maps as AI before now. A fancier way to describe this, according to MDPI, is "Navigation is the science and technology of accurately determining the position and velocity of an airborne, land, or marine vehicle relative to a known reference, wherein the planning and execution of the maneuvers necessary to move between desired locations are analyzed."

  • Robotics - I would say robotics and AI are two very different disciplines BUT they can overlap in some ways. Robotics is a branch of engineering and computer science where people build machines (like little robots) that can operate without human intervention (well, not too much intervention). These robots are built to carry out tasks that humans would get fatigued by (and to make mass production really easy). I've already defined AI in this article so I won't bore you to death by explaining it again, but AI-driven robots are still the minority - there are examples, though. Some include household products such as Amazon's Astro bot, robots in manufacturing, and even robots in healthcare known as "Waldo Surgeons". Yep, that's inspired by a Robert A. Heinlein sci-fi short story called "Waldo".

  • Healthcare - see above (the healthcare surgeon). If you want a deep dive on healthcare specifically, the Shaheen (2021) review in the reading list below is a good place to start.

  • Facial recognition - according to Sutherland and Global (2024), "Face recognition uses AI algorithms and ML to detect human faces from the background. The algorithm typically searches for human eyes, followed by eyebrows, nose, mouth, nostrils, and iris. Once all the facial features are captured, additional validations using large datasets containing both positive and negative images confirm that it is a human face." This technology is used in places like airports, warehouses, workplaces with a clock-in machine, law enforcement, etc. We will explore more below (and there's a little code sketch just after this list, if you're curious). Yep, this is mainly an excuse to talk about social media filters too hehe.

  • Autonomous vehicles (aka self-driving cars) - cars that operate without a human behind the wheel. They use AI technology to determine where to go, when to stop, when to give way, etc. - basically, they make the same decisions we would when driving.

  • Search engines - I kinda feel like this one is self-explanatory, but some examples are Gemini, Bing AI, and Yep.com.
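
Because the facial recognition bullet above actually describes how the detection step works, here's a tiny Python sketch of that step using OpenCV's bundled Haar cascade model. To be clear, this is just the "find a face in the image" part, not a full identity-matching system, and the file names are made up - treat it as a toy illustration, not how any real airport or workplace system works.

```python
# Toy face-detection sketch using OpenCV's bundled Haar cascade model.
# Assumes opencv-python is installed and that "photo.jpg" (a made-up file
# name) exists next to the script.
import cv2

image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # detection runs on grayscale

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Draw a green box around every detected face and save the result.
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("photo_with_faces.jpg", image)
print(f"Found {len(faces)} face(s)")
```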




Bro, that's a lot. I didn't think about AI that deeply until I sat down to write this post. Oh my golly gosh, we are in for a ride, aren't we?


Now that we have a bit more context on what AI actually is, let's take a deeper dive into the consequences (or implications, whatever you wanna call it) of AI in everyday life.



Labour Markets + Employment


I feel like this one can be both good and bad (and I guess there are pros and cons to everything). AI could increase labour productivity by automating mundane and routine tasks, which frees up time for workers to develop other skills (and may increase the value of those workers), but it could also decrease employment opportunities.


What kind of jobs will AI create? According to the World Economic Forum, some jobs that could be created by AI are AI trainers - people who develop AI, explainers - people who help explain what AI is to the general public, and sustainers - people who use AI and make sure it keeps being used in the best ways possible. Sidenote, I do love how the World Economic Forum discusses creative destruction - it makes my sociological brain happy. And they're exactly right: AI is literally creative destruction.


But what about the jobs that AI might destroy (or replace)? Well, according to Mearian (2024), roles that require repetitive tasks like data entry, legal admin, and mathematical work may be replaced by AI (or enhanced, depending on how you look at it). And healthcare may be impacted too, as we already saw with one example, Waldo.


And with such innovation, it means that new skill requirements will need to be met so are we going to lose more or learn more?


I don't have a specific subsection for AI in healthcare, but I just want to say a few things here if y'all don't mind. According to Shaheen (2021), there are many applications of AI in healthcare: AI for drug discovery, meaning AI has helped pharmaceutical companies fast-track the drug discovery process (Pfizer is using machine learning to help discover immuno-oncology treatments); AI in clinical trials, to help automate and speed up the process; and AI in patient care, to analyse people's quality of life.



Social Relationships + Interactions


I kinda feel like AI is going to influence our social relationships and interactions. For example, we already see the use of filters in social media apps such as Snapchat and Instagram (I am sure there are others but I can't think of them right now). A lot of filters are about changing the way we look - often to enhance our beauty or shape our faces to fit what's trending. So there's a whole debate there, but I also want to point out there's a whole range of silly filters that make us look "ugly" or not our best selves, and I think that's important because no one else ever seems to mention it - they always focus on the negatives, especially when it comes to social media.


A lot of social media platforms have also integrated AI into their own algorithms (even Wix - my blog site - uses AI inside the platform) to help give users a "better" experience. Social media platforms use AI to enable personalised content recommendations, real-time content analysis, and automated content generation (Mohamed et al., 2024). With this comes an enhanced user experience on social media, but it could also lead to the spread of misinformation, filter bubbles, and echo chambers.
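
To make "personalised content recommendations" a bit more concrete, here's a toy sketch of the basic idea: score each post by how similar its topics are to what a user has engaged with before, then rank the feed by that score. Real platforms are vastly more complicated (and secretive), so the vectors and post titles below are completely made up.

```python
# Toy "personalised feed" sketch: rank posts by cosine similarity between a
# user's interest vector and each post's topic vector. All numbers are made up.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy topic dimensions: [sociology, AI, cooking]
user_interests = np.array([0.9, 0.7, 0.1])   # built up from past likes/clicks
posts = {
    "Consequences of AI (sociology post)": np.array([0.8, 0.9, 0.0]),
    "Rotten Pumpkin cocktail recipe":      np.array([0.0, 0.0, 1.0]),
    "Hegemonic masculinity explainer":     np.array([1.0, 0.1, 0.0]),
}

# Highest-scoring posts go to the top of the feed - which is also how a
# filter bubble forms, since you mostly see what already matches your tastes.
for title in sorted(posts, key=lambda p: cosine(user_interests, posts[p]), reverse=True):
    print(f"{cosine(user_interests, posts[title]):.2f}  {title}")
```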


AI is also seen in things like customer relations, where companies will use AI similar to the way they would use a customer service agent. Doing this saves them money but also helps with time and location barriers (aka it doesn't matter what time a person is calling or chatting with a company, the AI agent can help them). It also means that companies can be located in cheaper areas. If they employ "AI agents" instead of people, such companies could generate higher revenue and turnover each year. Whether or not the user experience is as good as talking to a human is a different story (Chaturvedi et al., 2023). Personally, I hate calling people up, but I've also struggled with chatbots, so I'm on the fence with this one. I think I just dislike interacting with anyone or anything.


If you're like me, you've probably watched or read a lot of sci-fi content, and I love it, but it definitely still scares me, especially when it comes to robots. Westworld was such a good show, but it did scare me a lot. I do wish it got another season though, as I feel like a few things were left hanging. Anyway, we're going off-topic, but the reason I brought up sci-fi is that it makes me think of AI and companionship. Can AI really replace human companionship? Personally, I don't think so - there's just that extra special feeling when you create a human connection, whether it's falling in love, making a new friend, developing friendships, or just spending time with loved ones. In saying all this, it's not to say that some people won't be affected by AI companionship tools. I personally think that some people may become addicted to them.


If we replace human interaction with AI, we are likely to get lower quality and less satisfaction. It's just not the same. Quantified (2024) suggests "that human-to-human communication is vital to humanity's social life, and it should be nurtured and enhanced in any way possible" and "that data and artificial intelligence provide a powerful opportunity [to] enhance personal, human-to-human interaction and have more winning conversations." They also go on to say that the human fear of AI is based on the concept that AI will make humans obsolete, which definitely won't be the case, but I do understand where the fear comes from.



Surveillance + Privacy


Is AI the new eyes of surveillance? Well, one would assume so considering the way it can collect, interpret, and analyse data and do so at rapid speeds. With this, comes concerns of safety, privacy, and data collection. Remember Cambridge Analytica? Thanks, Zuckerberg.


According to Pfau (2024), in a Forbes article, "It's widely understood that AI tools allow people to create content—texts, images, videos and much more can be quickly created with AI. But these tools can also be used to track and profile individuals. AI allows for more detailed profiling and tracking of individuals' activities, movements and behaviors than was ever possible before. AI-based surveillance technology can, for instance, be used for marketing purposes and targeted advertising." This quote is very scary, as the use of AI in such a way can lead to invasions of privacy and make people feel uncomfortable. Just think about how many cameras you walk past when you walk down a street, or how many things are actually tracking our every move. Are our phones listening to us? I wouldn't go so far as to say our phones have a bug in them, but algorithms and the like work on prediction - they know what we want before we know we want it (if that makes sense). But what's our data being used for? Facial recognition technology is widely used in public spaces like train stations and airports, and probably lots of other places too. This leads to a lot of ethical concerns about the constant monitoring of people. Are we in a real-life 1984? Is Person of Interest going to become a reality? You tell me.



Ethics


Ahh, ethics, ethics, ethics. An interesting topic. I don't really know how people decide what is ethical and what isn't. Like, I know they have ethics committees and people decide together, but it is very interesting what sorts of experiments they used to get away with. And don't even get me started on what the f*ck the CIA has gotten away with. Like, how did MKUltra actually happen? Literal brainwashing, dude. Anyway, I don't know if we will see another MKUltra in our lifetime, but it wouldn't surprise me. I don't wanna say too much because I don't wanna feel like I'm being watched LOL. Anyway, back to the topic at hand!!


Is AI ready to make unsupervised decisions? Like moral decision-making? Simply put, no. Have you seen some of the whack images it generates? However, it can help with decision-making in general - we want AI to enhance our lives, not take over and make humans obsolete. According to McKendrick and Thurai (2022), AI makes the "right" decisions for the most part. What AI might struggle with is specific moral decision-making. McKendrick and Thurai (2022) discuss the "trolley problem" - where someone has to decide whether to sacrifice one person to save a larger number of people. Will AI be able to make a split-second decision? Will it make the right one? What's even scarier is the idea that, if AI can answer the trolley problem, it's able to think independently. Would AI make the same ethical and moral decisions that we would? Is there AI error like there is human error?


Would AI be able to have empathy for others or even sympathy? I know, these are just more questions than answers but I feel like right now, we don't really know where AI stands.


There's also the issue of bias and fairness that comes up. For example, the idea of AI as a recruitment tool. Let's look at the example from Amazon, where they used an AI-based tool to essentially "out-recruit" other big tech companies. You might be shocked to learn what actually happened. The recruitment tool did not like women, and yep, as a feminist and a woman myself, this is a BIG BIG problem. The data that this AI tool used was tainted because it was only really looking at men - it was designed to "vet applications by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry" (Dastin, 2018). In turn, this AI tool taught itself that men were preferable to hire. As Amazon was unable to make the tool gender-neutral, its development was scrapped. I feel like if AI is going to teach itself this sort of behaviour, then we need to talk about hegemonic masculinity - i.e., the practice which reinforces male dominance in society. How do we get away from that if AI is just going to have gender biases? Why is it that men are still in charge? Aren't countries with a woman as their leader a lot better off anyway? Well, that's just my opinion. But like, have you seen America right now? I'm sorry, but neither Biden nor Trump should be running. They need to be replaced with two better candidates and democracy needs to be democracy, not a representative democracy. Okay okay Ashy, you're going off topic again and starting to rant.
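
If you want to see how a tool could "teach itself" that kind of bias, here's a little made-up sketch in the spirit of the Amazon story (not their actual system - all the data below is synthetic). A classifier trained on skewed historical hiring decisions ends up scoring two otherwise identical candidates differently, purely because of a gendered proxy feature in the resume.

```python
# Toy sketch of how a hiring model inherits bias from skewed historical data.
# Everything here is synthetic illustration data, not Amazon's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# The feature we *want* the model to care about: years of experience.
experience = rng.integers(0, 15, size=n)
# A proxy for gender, e.g. the resume mentions a "women's" club or college
# (about 10% of this toy applicant pool).
womens_keyword = (rng.random(n) < 0.10).astype(int)

# Biased historical decisions: experience helps, but past recruiters also
# hired keyword=1 applicants far less often, regardless of experience.
logit = 0.4 * experience - 2.0 * womens_keyword - 2.5
hired = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(
    np.column_stack([experience, womens_keyword]), hired
)

# Two identical candidates (8 years of experience), differing only in the proxy:
print(model.predict_proba([[8, 0], [8, 1]])[:, 1])
# The keyword=1 candidate gets a noticeably lower score - the old bias lives on.
```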


And what about accountability? If AI does the wrong thing, who takes accountability for it? Will AI be able to apologise or rectify a mistake? Who knows.



Economic Inequality


Something that must be discussed in sociology, is economics. If you've ever taken a 100-level sociology course, you'll know about the social, political, cultural, and economic factors. All of these work together and can play an important role in impacting something. Often, each factor may be correlated too.


So what about AI and economic inequality? Well, we all know that we live in a crazy capitalist world where the rich get richer and the poor get poorer (thanks, Marx). But there's this question that always stuck with me from my university days, from one of my supervisors, Mike Grimshaw, who always used to say, "But, who has access?" And in hindsight, this is a really important question. Who has access to the technology of AI? Well, firstly, we need an internet connection - because not everyone has that - and we need to know how to use a computer and be tech-savvy enough to use things like ChatGPT. Those who don't have access might lose out. With all that being said, I also think AI will allow those who may not have had access to education before to finally get access to it. We also might see the rise of personalised learning - learning tailored to the way we want to learn or how we might learn best.


There is also the idea that with the rise of AI, income inequality will worsen. I don't have a better way of paraphrasing what Bell and Korinek (2023) say, so I am just going to quote it: "We argue that this poses a grave threat to democracy that is separate from more traditional AI risks to democracy such as deep fakes and misinformation: High inequality corrodes democratic institutions through increased elite influence, corruption, populism, and greater public discontent. At the same time, weakened democracy loses power to rein in inequality through progressive policies.

This may create a vicious feedback loop of eroding democracy and rising inequality, which may accelerate rapidly following an economic shock like large-scale displacement of workers by AI. The result could be a new society-wide equilibrium with starkly increased income disparities and a weakened voice for ordinary citizens."


And if this comes into play, wealth distribution will be impacted and we will see a rise in economic inequality. Bell and Korinek (2023) suggest that to minimise the likelihood of rising economic inequality, policies should be put in place such as not allowing AI to automate all work, empowering workers, reforming tax policies, and making sure there are no excessive power gains from the implementation of AI.



Social Norms


Social norms are the shared standards of behaviour that are "acceptable" in certain places, and they can differ between scenarios. For example, there's a social norm of not talking to other people in a lift, and if someone breaks a social norm, it can get quite uncomfortable. Or if someone faces the wrong way in a lift, it could be seen as breaking a social norm. But what about social norms and AI? Well, I personally think that AI is going to fundamentally change our social norms - especially in the Western world.


Baronchelli (2024) says "An outlook on how AI could influence the formation of future social norms emphasizes the importance for open societies to anchor their formal deliberation process in an open, inclusive and transparent public discourse." In my own interpretation of this quote, new social norms are likely to emerge from the use of AI but we don't yet know what they are going to be. It also just makes me think of things like Uncanny Valley - I don't know why. Are we going to lose agency with the rise of AI? How much of the world is it actually going to take over? Do we really need to be scared? And what's going to happen to our work-life balance? Are the days of the 9-5 over? Or are we still going to be overworked and underpaid? What happens to agency? Does AI have agency?


Ullrich and Diefenbach (2023) discuss social norms and AI and how society is becoming ever more digitalised. Things like the social cues we'd have in a person-to-person interaction will not be seen in the same way when interacting with an AI chatbot - it may not be able to pick up on subtle hints or cues like humans would. They go on to say that we may even see a decline in authenticity online when we see avatars. If a person is acting as an avatar on something like the metaverse, does that mean there are different social norms and expectations for that avatar? What happens if said avatar commits a crime? Is it treated the same way as crime IRL? What happens to trolls? Will we still be able to block them? So many thought-provoking questions!



In conclusion, this article has discussed the consequences (or implications) of AI on everyday life from a sociological lens. We have explored examples of AI, the effects of AI on labour markets and employment, AI and social relationships, AI and surveillance society, AI and ethics, AI and economic inequality, and AI and social norms. We can see that AI is another form of creative destruction but with the rise of AI comes way more questions than answers.


Please let me know in the comments what you think the future of AI holds.






Thanks for reading! Much love,

Ash xoxo


Some related articles I've written that you might want to check out are:



Some readings for you (if you want to learn more) - available via Google Scholar


  • Baronchelli, A. (2024). Shaping new norms for AI. Philosophical Transactions of the Royal Society B, 379(1897), 20230028.

  • Chaturvedi, R., Verma, S., Das, R., & Dwivedi, Y. K. (2023). Social companionship with artificial intelligence: Recent trends and future avenues. Technological Forecasting and Social Change, 193, 122634.

  • Collins, J. W., Marcus, H. J., Ghazi, A., Sridhar, A., Hashimoto, D., Hager, G., ... & Stoyanov, D. (2022). Ethical implications of AI in robotic surgical training: A Delphi consensus statement. European Urology Focus, 8(2), 613-622.

  • Fontes, C., Hohma, E., Corrigan, C. C., & Lütge, C. (2022). AI-powered public surveillance systems: why we (might) need them and how we want them. Technology in Society, 71, 102137.

  • Mohamed, E. A. S., Osman, M. E. & Mohamed, B. A. (2024). The Impact of Artificial Intelligence on Social Media Content. Journal of Social Sciences, 20(1), 12-16. https://doi.org/10.3844/jssp.2024.12.16

  • Shaheen, M. Y. (2021). Applications of Artificial Intelligence (AI) in healthcare: A review. ScienceOpen Preprints.

  • Tolmeijer, S., Christen, M., Kandul, S., Kneer, M., & Bernstein, A. (2022, April). Capable but amoral? Comparing AI and human expert collaboration in ethical decision making. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (pp. 1-17).

  • Ullrich, D., & Diefenbach, S. (2023). Forecasting Transitions in Digital Society: From Social Norms to AI Applications. Engineering Proceedings, 39(1), 88.

