One of the fascinating aspects of artificial intelligence (AI) is its ability to answer questions about itself. Instead of spending tedious hours on Google, one can simply open a browser tab, pick an AI model and ask. In return, it provides seamless responses in elaborate, flowery sentences. I once had the ability to write long, flowing text with added flair; unfortunately, scientific writing pushes one into the norm of clear, short, precise sentences. For an AI that repeatedly insists it has no gender bias or emotions, it sure can formulate sentences that drone on in an almost Shakespearean way. It also seems to follow the rule of those students who write too much for a 2-mark question: it will give you a 5-marks-worth reply.

The answers, though, have good structure: they state a point, elaborate on it and, finally, conclude. So maybe there is something to learn from this Python-coded language model that adapts and learns “using deep learning libraries such as TensorFlow and PyTorch.”

Contrary to popular opinion, AI is not a recent development. The earliest work in AI was performed in 1935 by the British computer pioneer Alan M. Turing, who described an abstract machine consisting of a scanner and an unlimited memory, able to perform actions like reading and writing symbols. In a public lecture in 1947, he spoke of “a machine that can learn from experience”, and though much of this work went unpublished, the world eventually received these concepts in the form of a chess-playing computer [2].
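For the curious, the machine Turing described is simple enough to sketch in a few lines of Python. The toy below is my own hypothetical illustration (the rule table and the binary-increment task are mine, not Turing's): a scanner reads and writes symbols on an effectively unlimited tape, moving left or right according to a table of rules.

```python
def run_turing_machine(tape, rules, state, halt="halt", head=0):
    """Run a (state, symbol) -> (write, move, next_state) rule table."""
    cells = dict(enumerate(tape))  # sparse tape: effectively unlimited memory
    while state != halt:
        symbol = cells.get(head, "_")             # "_" marks a blank cell
        write, move, state = rules[(state, symbol)]
        cells[head] = write                       # the scanner writes a symbol...
        head += 1 if move == "R" else -1          # ...and moves along the tape
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Hypothetical rule table: increment a binary number, starting at its last digit.
rules = {
    ("inc", "1"): ("0", "L", "inc"),   # 1 plus a carry becomes 0; carry moves left
    ("inc", "0"): ("1", "L", "halt"),  # the carry is absorbed; stop
    ("inc", "_"): ("1", "L", "halt"),  # the carry overflows; grow the tape
}
print(run_turing_machine("011", rules, state="inc", head=2))  # prints 100
```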

Many have tried before; OpenAI is, in fact, the only one that has been successful and made a lasting impact to date. Recently, the platform has found competitors such as StabilityAI, which creates digital art, and Anthropic and Character.AI, whose chatbots are similar to ChatGPT [3].

Baidu, the Chinese tech company, has developed an AI model for virtual assistants and autonomous driving technology. One can compare it to Alexa or Siri, but Baidu claims its machine learning model caters to a wider range of dialects and accents and can recognise speech in noisy environments. The company was established in 2000 and has been working on AI technology since 2010 [4].

Google DeepMind's AI chatbot, Sparrow, has something that ChatGPT does not: the ability to cite sources. Early evidence suggests Sparrow is more accurate, with its answers supported by evidence 78% of the time, but it has yet to be released for public use. Interestingly, DeepMind too was established in 2010 and has been in this field ever since [5,6]. OpenAI, the American artificial intelligence research laboratory, was established in 2015 [1].

Technology has the primary objective of simplifying complex tasks and making them more accessible, efficient and convenient. Previously it was Google Maps, or Google's famous “___ near me”, where one could ask for anything from petrol pumps and eateries to pharmacies and physicians.

In 2018, OpenAI introduced the “Generative Pre-trained Transformer” (GPT), which could perform text-based tasks like answering questions. Its successive models were trained on a vast corpus of books, online articles and websites, and are now able to summarise text and generate responses relevant to different languages and cultures. Unlike Google Search, it does not scour the web to match the most relevant pages to keywords; it generates text to answer specific questions [8].
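The distinction is easy to caricature in code. The Python toy below is a hypothetical sketch of the two approaches; the overlap score and the tiny bigram “model” are stand-ins of my own, nothing like how Google Search or GPT actually work.

```python
def keyword_search(query, pages):
    """Search-engine style: return the existing page with the most keyword overlap."""
    terms = set(query.lower().split())
    return max(pages, key=lambda page: len(terms & set(page.lower().split())))

def generate(prompt, bigrams, length=3):
    """GPT style: compose new text, one most-likely next word at a time."""
    words = prompt.lower().split()
    for _ in range(length):
        words.append(bigrams.get(words[-1], "..."))  # pick a next word, never a page
    return " ".join(words)

pages = ["pharmacies near me open now", "best eateries in town"]
print(keyword_search("pharmacies near me", pages))  # retrieves an existing page

bigrams = {"is": "a", "a": "language", "language": "model"}
print(generate("gpt is", bigrams))  # composes new text: "gpt is a language model"
```

One retrieves what already exists; the other writes something that did not exist until you asked.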

Before the booming rise of ChatGPT, Nature Reviews published an article on artificial intelligence, calling it a revolutionary tool for science. The article argues that AI encourages a multidisciplinary approach, categorising it into weak, strong and ultrastrong machine learning based on its ability to learn and improve itself. When provided with a problem, AI models demonstrated a good understanding of spatial dimensions in physics experiments, providing reasonable and sometimes inspirational insights [9].

In 2017, Science commented on the use of various AI models, using the term “AI neuroscience” for efforts to understand how the neural networks behind these large language models (LLMs) “think”. Since one does not understand how they reach certain conclusions, one cannot possibly have confidence in this Pandora's box. Scientists have previously concocted in-silico methods to aid experiments, such as primer design for PCR, but the future of AI lies in carrying out such experiments and interpreting the results, ushering in an era of automated science. Without confidence in these neural networks, such experiments are but a wispy cloud in the sky [7].
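Those in-silico checks, incidentally, are the easy part to script. A minimal sketch, assuming the classic Wallace rule for melting temperature (real primer-design tools use far more sophisticated nearest-neighbour thermodynamics):

```python
def primer_stats(seq):
    """GC content and melting temperature via the Wallace rule:
    Tm = 2 * (A + T) + 4 * (G + C), a rough estimate for primers under ~14 nt."""
    seq = seq.upper()
    gc = seq.count("G") + seq.count("C")
    at = seq.count("A") + seq.count("T")
    return {"gc_percent": 100 * gc / len(seq), "tm_celsius": 2 * at + 4 * gc}

print(primer_stats("ATGCGTACGT"))  # {'gc_percent': 50.0, 'tm_celsius': 30}
```

Automating the experiment itself, and the interpretation of its results, is an altogether different beast.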

Once AI models can accurately interpret or analyse data, what happens to scientists? Will the scientific community face the same fate as other job markets?
My fellow elite members, be it in agriculture, health or the environment, the problems of the living can only be comprehended by the living. Formulating innovative ideas is still a novel concept for AI. There may even come a future where AI can design experiments, though I deem these will still be based on pre-performed, proven data; identifying the gaps in scientific fields may yet prove to be a task for brilliant biological minds. Creating higher-yielding crop varieties by genetically engineering plants is not an idea AI can wake up to. Yet.

Nature has called ChatGPT “fluent but not factual”, owing to its failure to produce consistently reliable interpretations. Apart from misleading information, the authors also question its safety and responsibility, asking for transparency and proper citation wherever it is used [10]. This raises the question: how much, quantitatively, can one lean on the aid of AI? Although this may improve in the future as AI models are trained on ever larger data sets, their use remains debatable. The lack of accountability and transparency, especially when discerning patent rights, is another major flaw, since this technology blurs the lines between authenticity, plagiarism and sources [11]. Will the AI parent company ask for its pound of flesh?

AI models are put through what is called a “Turing test” (a concept originated by Alan Turing, as previously mentioned), which gauges an AI model's ability to imitate, or “exhibit intelligent behaviour” equivalent to, a human [2]. Both ChatGPT and Google's LaMDA, the Language Model for Dialogue Applications, are said to have passed the Turing test, meaning they convinced a human judge that they were “human” by holding an elaborate conversation with another human. My question is this: is it a bad thing that AI models have reached that level of “intelligence”? Is this a proper, fool-proof test for AI? Absolutely not. There is human bias, and these systems appear more “intelligent” simply because they have a faster processing speed for high-throughput data. Is there a need to test “independence” in technology? Maybe. But for what reason?
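To make the setup concrete, here is a toy rendering of Turing's imitation game in Python. Everything in it is a hypothetical stand-in of mine (canned respondents, a coin-flipping judge), not a real evaluation protocol; it only shows why indistinguishable answers leave the judge at chance.

```python
import random

def imitation_game(questions, human, machine, judge, trials=1000):
    """The judge interrogates two unlabelled respondents and must name the machine."""
    fooled = 0
    for _ in range(trials):
        seats = [("human", human), ("machine", machine)]
        random.shuffle(seats)                                   # hide the labels
        transcript = [(q, seats[0][1](q), seats[1][1](q)) for q in questions]
        if seats[judge(transcript)][0] != "machine":            # judge picks a seat
            fooled += 1
    return fooled / trials  # ~0.5 means the judge does no better than chance

human = lambda q: "I'd have to think about that."
machine = lambda q: "I'd have to think about that."  # a perfect mimic
judge = lambda transcript: random.randint(0, 1)      # nothing to go on but a guess

print(imitation_game(["What is love?"], human, machine, judge))  # ~0.5
```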

There are contradictory opinions, but most of them stem from self-preservation. What happens if an AI can pass off as human? Will it take over the world? The time is not far when AI will rule. It can learn the way humans interact. The greatest flaw of AI was its lack of bias and emotions, but it can learn those now. An abomination. Catastrophe!

My reasoning is simple- just switch off the power supply. 

Until then, AI is but a technology and, like all technologies before it, is made to simplify basic tasks.

REFERENCES:

  1. OpenAI. (n.d.). Product. Retrieved March 18, 2023, from https://openai.com/product
  2. Encyclopedia Britannica. (n.d.). Evolutionary computing. In Encyclopedia Britannica. Retrieved March 18, 2023, from https://www.britannica.com/technology/artificial-intelligence/Evolutionary-computing
  3. Gupta, A. (2021, September 1). These are OpenAI’s strongest competitors right now. The Indian Express. https://indianexpress.com/article/technology/these-are-openais-strongest-competitors-right-now-8427624/
  4. Baidu. (n.d.). DuerOS. Retrieved March 18, 2023, from https://dueros.baidu.com/en/
  5. Vincent, J. (2022, January 12). DeepMind’s latest AI chatbot is designed to make conversation ‘as engaging as possible’. The Independent. Retrieved March 18, 2023, from https://www.independent.co.uk/tech/deepmind-ai-chatbot-chatgpt-openai-b2262862.html 
  6. Rodriguez, J. (2022, March 23). Inside Sparrow: The Foundation of DeepMind’s ChatGPT Alternative. Medium. Retrieved March 18, 2023, from https://jrodthoughts.medium.com/inside-sparrow-the-foundation-of-deepminds-chatgpt-alternative-854df43569fd
  7. Ruder, S. (2018, May 16). The AI Revolution in Science. Science. Retrieved March 18, 2023, from https://www.science.org/content/article/ai-revolution-science 
  8. Chen, J. (2020, September 28). The History of OpenAI’s ChatGPT. Taskade Blog. Retrieved March 18, 2023, from https://www.taskade.com/blog/openai-chatgpt-history/
  9. Yuan, J., Chen, C., Hu, X., et al. (2022). Recent advances in deep learning for drug discovery. Nature Machine Intelligence, 4, 329–341. https://doi.org/10.1038/s42254-022-00518-3
  10. Stokel-Walker, C., & Van Noorden, R. (2023). What ChatGPT and generative AI mean for science. Nature, 614(7947), 214-216.
  11. van Dis, E. A., Bollen, J., Zuidema, W., van Rooij, R., & Bockting, C. L. (2023). ChatGPT: five priorities for research. Nature, 614(7947), 224-226. https://doi.org/10.1038/d41586-023-00288-7 
