Sunday, May 7, 2023

My personal stand on using AI tools like ChatGPT and GitHub Copilot

I have been dabbling with GitHub Copilot and ChatGPT for a couple of months now and have been blown away by how much easier they make both my professional and personal life. Using Copilot, I was able to make great progress learning a new test automation tool in a language I had little experience with. With ChatGPT, I have made improvements to many areas of my daily existence. But after some time, like most of us who have come across the latest generation of AI tools, I stopped and asked myself how ethical it is to keep using them.

A front-end engineer in the ’90s worked with HTML and maybe CSS, but if you only knew those technologies now you wouldn’t be able to find work. Most testers in the early ’90s or early ’00s could have gone on without knowing how to automate test cases, but that’s not the case now. So it’s not out of the ordinary for us engineers to see job descriptions change. What is different is the magnitude of the social change the latest generation of AI tools can bring about: it is so great that it could radically reshape society for the worse, because, in my opinion, neither these technologies nor the governance around them is mature enough yet.

According to the five criteria outlined in this short IBM video on the topic[1], a user should check whether an AI’s responses demonstrate:
  1. Its fairness in representing all groups in societies around the world.
  2. Its explainability in how it arrived at a given response.
  3. Its robustness in ensuring that fairness.
  4. Its data privacy.
  5. Its transparency about the data and processes used to build it.

Taking ChatGPT as an example, my main concern is how an average user can be sure of points 1 and 3. The training data set of ChatGPT is proprietary, and although there may be papers published on the topic, they are not accessible to the average person. It is also my personal opinion that there is far too much bias and noise in some of the likely sources (internet articles and even research papers). So, going by the seemingly overnight adoption, any biases and noise in its training data will be magnified a hundredfold in society, faster than we all ran out and bought ourselves mobile phones.

Apart from any damage AI poses to society through unfairness in its responses, it seems like there isn’t a clerical job AI can’t do better than the majority of us can (provided a few of us are really hell-bent on getting the AI past hurdles like switching between tasks and processes).

Personally, I am of the opinion that until governments and experts come up with regulation and the means to contain any global societal damage, we the users must exercise restraint and act responsibly. I tackled this problem by thinking it through with Kantian ethics and coming up with some conclusions (which I plan to stick to for the foreseeable future).

That I should:

  • Not use code augmentation tools like GitHub Copilot when I am working with technology I am comfortable with at work and other means of information retrieval are available.
  • Use code augmentation tools when I am learning something completely new (like a new programming language), the fine print being that it shouldn’t be unfair to someone else’s potential to earn.
  • Use AI like ChatGPT for information retrieval when other means of getting the same information are not practical, but try to compensate by being smarter with my distribution of wealth to those affected (like how some of us support small businesses).

If you’re interested in the fine print, you can find notes on the quick Kantian analysis I did here[2].
