{"id":403,"date":"2026-05-11T10:51:10","date_gmt":"2026-05-11T10:51:10","guid":{"rendered":"https:\/\/redzine.co.uk\/index.php\/2026\/05\/11\/what-happens-when-scientists-trust-ai-more-than-colleagues\/"},"modified":"2026-05-11T10:51:10","modified_gmt":"2026-05-11T10:51:10","slug":"what-happens-when-scientists-trust-ai-more-than-colleagues","status":"publish","type":"post","link":"https:\/\/redzine.co.uk\/index.php\/2026\/05\/11\/what-happens-when-scientists-trust-ai-more-than-colleagues\/","title":{"rendered":"What happens when scientists trust AI more than colleagues?"},"content":{"rendered":"<figure><img decoding=\"async\" src=\"https:\/\/images.theconversation.com\/files\/733300\/original\/file-20260430-57-sc06ca.jpg?ixlib=rb-4.1.0&amp;rect=0%2C1%2C7040%2C4693&amp;q=45&amp;auto=format&amp;w=1050&amp;h=700&amp;fit=crop\" \/><figcaption><span class=\"caption\"><\/span> <span class=\"attribution\"><a class=\"source\" href=\"https:\/\/www.shutterstock.com\/image-photo\/using-clinical-trials-investigative-methods-reach-2297780907?trackingId=cb63f2b1-df8a-4288-9401-58471473d694&amp;listId=searchResults\">Shutterstock\/PeoplesImages<\/a><\/span><\/figcaption><\/figure>\n<p>Artificial intelligence has crossed a threshold in the modern workplace. It is being used for everything from helping employees manage schedules to supporting financial forecasts. A similar shift is now unfolding inside research laboratories.<\/p>\n<p>There is currently a boom in national initiatives to accelerate the integration of AI into science. These include the <a href=\"https:\/\/www.whitehouse.gov\/presidential-actions\/2025\/11\/launching-the-genesis-mission\/\">US Genesis Mission<\/a> and <a href=\"https:\/\/www.msit.go.kr\/eng\/bbs\/view.do?sCode=eng&amp;mId=4&amp;mPid=2&amp;bbsSeqNo=42&amp;nttSeqNo=1200\">South Korea\u2019s AI Co-Scientist Challenge<\/a>. 
But despite clear benefits, we believe these institutional drives are neglecting important issues that carry immense risks for scientific research.<\/p>\n<p>Today, <a href=\"https:\/\/newsroom.wiley.com\/press-releases\/press-release-details\/2025\/AI-Adoption-Jumps-to-84-Among-Researchers-as-Expectations-Undergo-Significant-Reality-Check\/default.aspx\">84%<\/a> of researchers use <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0048733325002100\">AI for work tasks<\/a>, including reviewing academic papers and designing experiments. <\/p>\n<p><a href=\"https:\/\/deepmind.google\/science\/alphafold\/\">AlphaFold<\/a> is an AI tool developed to predict the structures of proteins for scientific research. Working out protein structures was incredibly time-consuming before its release \u2013 taking years in some cases. The same tasks now take hours. AlphaFold\u2019s developers were recognised with the <a href=\"https:\/\/www.nobelprize.org\/prizes\/chemistry\/2024\/summary\/\">2024 Nobel Prize in Chemistry<\/a>. <\/p>\n<p>AI tools in medicine now assist with everything from <a href=\"https:\/\/www.nature.com\/articles\/s41586-019-1799-6\">the interpretation<\/a> of <a href=\"https:\/\/www.nature.com\/articles\/s41591-018-0300-7\">results from X-rays and MRIs<\/a> to supporting doctors\u2019 decisions on the <a href=\"https:\/\/www.nature.com\/articles\/s41591-018-0213-5\">diagnosis and treatment of disease<\/a>. <\/p>\n<p>Our key concern is that hasty adoption of AI may gradually erode the scientific culture and human relationships that sustain rigorous research. It starts with the <a href=\"https:\/\/arxiv.org\/abs\/2506.08872\">erosion of core thinking skills<\/a> among researchers, as a result of an <a href=\"https:\/\/www.mdpi.com\/2075-4698\/15\/1\/6\">increased reliance<\/a> on AI <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0001691825010388\">to perform that work<\/a>. 
This can alienate researchers from the deeper reasoning behind their work. <\/p>\n<h2>Loss of independent thinking<\/h2>\n<p><a href=\"https:\/\/www.mdpi.com\/2075-4698\/15\/1\/6\">Early-career scientists<\/a> are <a href=\"https:\/\/www.frontiersin.org\/journals\/psychology\/articles\/10.3389\/fpsyg.2022.839728\/full\">particularly vulnerable<\/a>, because they are still developing their scientific reasoning. Troubleshooting skills and the critical evaluation of ideas may be outsourced to AI systems. <\/p>\n<p>AI\u2019s fluent, confident and immediate responses <a href=\"https:\/\/www.nature.com\/articles\/d41586-023-00288-7\">can easily be mistaken<\/a> for authoritative information. Once researchers begin to treat AI outputs as implicitly correct, the responsibility for judgment calls may gradually shift from them to their machines.<\/p>\n<p>AI\u2019s persuasive arguments, probably drawn from mainstream ideas in its training data, could replace more rigorous, time-consuming and creative research approaches. These are traditionally shaped through critical back-and-forth discussions between researchers. <\/p>\n<p>This can evolve into over-dependence. As reasoning is delegated to AI, researchers become less confident working unaided. Unfortunately, modern scientific labs are full of conditions <a href=\"https:\/\/www.frontiersin.org\/journals\/psychology\/articles\/10.3389\/fpsyg.2022.839728\/full\">that reinforce<\/a> this dependence, <a href=\"https:\/\/www.nature.com\/articles\/d41586-021-01751-z\">such as intense competition<\/a>, long hours and <a href=\"https:\/\/link.springer.com\/article\/10.1038\/embor.2013.35\">frequent isolation<\/a>.<\/p>\n<p>Limited mentorship, and feedback from colleagues that is delayed, critical or <a href=\"https:\/\/www.tandfonline.com\/doi\/full\/10.1080\/03075079.2025.2588278\">politically influenced<\/a>, can exacerbate the problem. 
In contrast, AI provides an immediate, <a href=\"https:\/\/journals.sagepub.com\/doi\/10.1177\/1745691612459058\">patient<\/a> and <a href=\"https:\/\/clutejournals.com\/index.php\/CIER\/article\/view\/4236\">nonjudgmental<\/a> alternative.<\/p>\n<p>Scientists interact with AI systems daily to check computer code, revise illustrations or charts, draft the language for grant applications, clarify scientific concepts and, at times, ask for personal advice.<\/p>\n<p>As researchers come to trust an AI assistant, it can begin to function less like a tool and more like a companion. This phenomenon <a href=\"https:\/\/arxiv.org\/abs\/2504.14112\">carries the risk<\/a> of <a href=\"https:\/\/arxiv.org\/abs\/2503.17473\">emotional dependency<\/a>, too. When GPT-4o <a href=\"https:\/\/www.theverge.com\/news\/756980\/openai-chatgpt-users-mourn-gpt-5-4o\">was retired<\/a>, many users expressed a <a href=\"https:\/\/www.theguardian.com\/lifeandstyle\/ng-interactive\/2026\/feb\/13\/openai-chatbot-gpt4o-valentines-day\">form of grief<\/a>. <\/p>\n<h2>Replacing relationships<\/h2>\n<p>Another important concern is the potential replacement of human relationships in the office or research lab. AI is always available, nonjudgmental, noncompeting \u2013 and indifferent to office politics, with no ego to defend. It remembers context, adapts to individual working styles, and offers reassurance without social cost. <\/p>\n<p>Human scientific relationships <a href=\"https:\/\/www.tandfonline.com\/doi\/full\/10.1080\/03075079.2025.2588278\">are more complicated<\/a>, involving <a href=\"https:\/\/journals.sagepub.com\/doi\/10.1177\/1745691612459058\">nuance<\/a>, criticism, time constraints, <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0048733324002142\">hierarchy<\/a> \u2013 and sometimes, ulterior motives. 
For early-career researchers especially, <a href=\"https:\/\/journals.uj.ac.za\/SOTL\/index.php\/sotls\/article\/view\/301\">these interactions<\/a> can <a href=\"https:\/\/www.chemistryworld.com\/news\/worldwide-survey-of-phd-students-reveals-bullying-discrimination-and-anxiety\/4010693.article\">feel risky<\/a>. <\/p>\n<figure class=\"align-center \">\n            <img decoding=\"async\" alt=\"Researcher at work\" src=\"https:\/\/images.theconversation.com\/files\/733395\/original\/file-20260430-57-66o852.jpg?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip\"><figcaption>\n              <span class=\"caption\">Early-career researchers may be particularly at risk of over-reliance on AI systems for advice.<\/span><br \/>\n              <span class=\"attribution\"><a class=\"source\" href=\"https:\/\/www.shutterstock.com\/image-photo\/man-college-student-outdoor-laptop-writing-2618932791?trackingId=734acc0b-5029-4564-9242-2b3180c05921&amp;listId=searchResults\">PeopleImages \/ Shutterstock<\/a><\/span><br \/>\n            <\/figcaption><\/figure>\n<p>Critical feedback from humans can feel adversarial, while <a href=\"https:\/\/www.nature.com\/articles\/d41586-025-02190-w\">AI responses feel supportive<\/a>. So early-career scientists may have good reason to prefer testing ideas and seeking validation with AI rather than with their peers or superiors. <\/p>\n<p>The scientific community cannot thrive without opposing ideas, deep scepticism of consensus, vigorous debate and rigorous mentoring. 
If AI begins to replace these, it threatens the foundations on which scientific progress has always rested.<\/p>\n<p>The current debate on AI safety mostly focuses on errors in models\u2019 responses, or on AI systems circumventing the restrictions imposed on the way they work, known as <a href=\"https:\/\/insidegovuk.blog.gov.uk\/2024\/11\/05\/gov-uk-chat-understanding-and-addressing-jailbreaking-in-our-generative-ai-experiment\/\">\u201cjailbreaking\u201d<\/a>. Such safeguards do little to address AI models\u2019 societal and cultural impact. <\/p>\n<p>Given the recent drives to get scientists working more closely with AI assistants, we should educate young scientists about the <a href=\"https:\/\/arxiv.org\/abs\/2506.08872\">risks of AI dependence<\/a>. We also need benchmarks that rigorously test AI models\u2019 ability to establish boundaries with users, to prevent overdependence and other unhealthy interactions. <\/p>\n<p>Finally, all of us \u2013 but especially institutional leaders \u2013 should understand the capabilities and permanence of AI companionship. AI companions are here to stay, and we should learn to make our relationships with them as healthy as possible.<\/p>\n<p class=\"fine-print\"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.<\/span><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Shutterstock\/PeoplesImages Artificial intelligence has crossed a threshold in the modern workplace. It is being used for everything from helping employees manage schedules to supporting financial forecasts. A similar shift is now unfolding inside research laboratories. 
There is currently a boom in national initiatives to accelerate the integration of AI into science. These include the US [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-403","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/redzine.co.uk\/index.php\/wp-json\/wp\/v2\/posts\/403","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/redzine.co.uk\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/redzine.co.uk\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/redzine.co.uk\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/redzine.co.uk\/index.php\/wp-json\/wp\/v2\/comments?post=403"}],"version-history":[{"count":0,"href":"https:\/\/redzine.co.uk\/index.php\/wp-json\/wp\/v2\/posts\/403\/revisions"}],"wp:attachment":[{"href":"https:\/\/redzine.co.uk\/index.php\/wp-json\/wp\/v2\/media?parent=403"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/redzine.co.uk\/index.php\/wp-json\/wp\/v2\/categories?post=403"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/redzine.co.uk\/index.php\/wp-json\/wp\/v2\/tags?post=403"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}