AI & GPT: Uses and Ethics of AI

A research guide intended to educate WPI faculty, staff, and students about the current state of AI in science and academia, and to suggest methods and considerations for the ethical use of AI in research.

Using AI

Opportunities

Generative AI tools can help you with:

  • Brainstorming; editing; and improving grammar, sentence construction, and other language elements
  • Summarizing information like your class or research notes
  • Generating practice or quiz questions for exam preparation
  • Translating text into different languages
  • Providing stimulus for ideas that you extend and develop
  • Organizing your study time

Limitations and Concerns

Generative AI tools and their output:

  • Are based on a limited and unknown set of data
  • Can be generic and lack true understanding of the content area
  • Generate false information (referred to as hallucinations)
  • Don't provide citations for the information they give you
  • Include biased information and reproduce errors in their sources

Basically, generative AI is good at making new things - if you give it specific building blocks, it will build from them! But generative AI is terrible at retrieving existing things - that's what search engines are for, and the two tools are not interchangeable.

________________________________________________

Learn Prompting: Your Guide to Communicating with Artificial Intelligence

"Learn how to use ChatGPT and other AI tools to accomplish your goals using our free and open source curriculum, designed for all skill levels!"

Artificial Intelligence: A Graduate-Student User’s Guide (Leonard Cassuto, Chronicle of Higher Education, July 25, 2023)

"AI can play a positive role in a doctoral student’s research and writing — if we let it." Advice is relevant to students or researchers at every level.

How to use AI to do practical stuff: A new guide (Ethan Mollick, March 29, 2023)

Clear, concise explanation of Large Language Models and some of the things they are useful for, including: write stuff; make images; come up with ideas; make videos; coding; and learn stuff. Concludes: "AI is a tool. It is not always the right tool. Consider carefully whether, given its weaknesses, it is right for the purpose to which you are planning to apply it. There are many ethical concerns you need to be aware of. AI can be used to infringe on copyright, or to cheat, or to steal the work of others, or to manipulate. And how a particular AI model is built and who benefits from its use are often complex issues, and not particularly clear at this stage. Ultimately, you are responsible for using these tools in an ethical manner."

15 times to use AI and 5 not to (Ethan Mollick, December 9, 2024)

"There are several types of work where AI can be particularly useful, given the current capabilities and limitations of LLMs. Though this list is based in science, it draws even more from experience. Like any form of wisdom, using AI well requires holding opposing ideas in mind: it can be transformative yet must be approached with skepticism, powerful yet prone to subtle failures, essential for some tasks yet actively harmful for others. I also want to caveat that you shouldn't take this list too seriously except as inspiration - you know your own situation best, and local knowledge matters more than any general principles. With all that out of the way, below are several types of tasks where AI can be especially useful, given current capabilities—and some scenarios where you should remain wary."

AI Ethics and Concerns

Video by Jane Stimpson of the Massachusetts Library Association. Recorded October 2023.

Do's and Don'ts (zapier.com)

A list of do's and don'ts for using AI, from Zapier.

AI Research Ethics Guidelines

Ethical Considerations:

  • Bias and Fairness:
    • Researchers should be aware of potential biases in AI algorithms and training data that could lead to unfair or discriminatory outcomes.
    • Guidelines should emphasize the importance of using diverse and representative datasets and employing bias detection and mitigation techniques.
    • Regularly evaluate AI models for fairness across different demographic groups.
  • Transparency and Explainability:
    • Promote the use of AI models that are transparent and whose decision-making processes can be understood and interpreted (Explainable AI - XAI).
    • Document the AI methods and models used, including data sources, hyperparameters, and model performance metrics, to ensure traceability.
  • Accountability and Responsibility:
    • Clearly define the roles and responsibilities of researchers when using AI tools.
    • Researchers are accountable for the outputs generated by AI and must verify the accuracy and reliability of the information.
    • Establish mechanisms for auditing AI systems and addressing unintended consequences.
  • Privacy and Data Governance:
    • Adhere to data protection regulations and institutional policies regarding the collection, storage, and use of research data in AI applications.
    • Ensure that the privacy of research participants is maintained and that data is handled securely.
    • Understand the terms of service of AI tools, especially regarding data ownership and usage by third-party providers.
  • Informed Consent:
    • Obtain informed consent from participants when AI systems are involved in data collection or interact with human subjects.
    • Clearly explain how AI will be used in the research and the potential risks and benefits.
  • Intellectual Property and Authorship:
    • Address issues related to intellectual property rights when using AI tools for content generation or analysis.
    • Clearly define authorship responsibilities when AI contributes to research outputs, adhering to relevant publication ethics guidelines.

Best Practices for Using AI in Research:

  • Complementary Tool: AI should be viewed as a complementary resource to enhance research, not a replacement for critical thinking and researcher expertise.
  • Verification and Validation: Researchers must critically evaluate and verify the outputs generated by AI tools using reliable sources and their own expertise.
  • Literature Review: Exercise caution when using AI for literature reviews; always verify the accuracy and relevance of cited sources.
  • Data Quality: Use high-quality, well-documented data for training and analysis with AI models.
  • Transparency in Publications: Disclose the use of AI tools in research publications, including the name and version of the tool and how it was used.
  • Reproducibility: Ensure that AI-assisted research is reproducible by documenting the data, code, and AI model parameters used.
  • Human Oversight: Maintain human oversight in critical decision-making processes where AI is involved.
  • Training and Education: Provide researchers with adequate training and resources on the responsible and ethical use of AI tools.

This text was copied from Rukmal Ryder's excellent guide on AI and Information Literacy at Salem State University.
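
To make the "Bias and Fairness" guideline above more concrete, here is a minimal sketch of one common check: comparing a model's positive-prediction ("selection") rates across demographic groups. Everything in it - the predictions, group labels, and function names - is hypothetical and illustrative; a real fairness audit uses multiple metrics and human judgment.

    # A minimal, illustrative fairness check: compare a model's
    # positive-prediction ("selection") rates across demographic groups.
    # The predictions and group labels below are made up for the example.

    from collections import defaultdict

    def selection_rates(predictions, groups):
        """Return the share of positive predictions for each group."""
        counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
        for pred, group in zip(predictions, groups):
            counts[group][0] += int(pred == 1)
            counts[group][1] += 1
        return {g: pos / total for g, (pos, total) in counts.items()}

    def demographic_parity_gap(predictions, groups):
        """Largest difference in selection rates between any two groups."""
        rates = selection_rates(predictions, groups)
        return max(rates.values()) - min(rates.values())

    # Hypothetical model outputs: a gap near 0 suggests similar treatment;
    # a large gap flags the model for closer human review.
    preds  = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    print(selection_rates(preds, groups))         # {'A': 0.75, 'B': 0.25}
    print(demographic_parity_gap(preds, groups))  # 0.5

A large gap does not by itself prove unfair treatment, but it tells you where to look more closely.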
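
Similarly, the "Reproducibility" and "Transparency in Publications" practices can start with something as simple as a structured log of each AI-assisted step. The sketch below writes one such record to a JSON file; the field names are assumptions for illustration, not a standard schema, so adapt them to your publisher's or institution's requirements.

    # A minimal sketch of an "AI use log" supporting reproducibility and
    # disclosure: record the tool, version, settings, and how outputs were
    # checked. Field names are illustrative, not a standard schema.

    import json
    from datetime import date

    run_record = {
        "tool_name": "ExampleLLM",            # hypothetical tool name
        "tool_version": "2025-01",            # model version or snapshot
        "date_used": date.today().isoformat(),
        "task": "summarized interview transcripts",
        "prompt": "Summarize the key themes in the attached notes.",
        "parameters": {"temperature": 0.2},   # settings that affect output
        "human_review": "Summaries checked against original transcripts.",
    }

    with open("ai_use_log.json", "w") as f:
        json.dump(run_record, f, indent=2)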

Articles discussing AI Ethics and Bias

AI ethics: The ethical issues of artificial intelligence, Harry Guinness (zapier.com, March 22, 2023)

"With the rise of text-generating AI tools like GPT-3 and GPT-4image-generating AI tools like DALL·E 2 and Stable Diffusion, voice-generating AI tools like Microsoft's VALL-E, and everything else that hasn't been announced yet, we're entering a new era of content generation. And with it comes plenty of thorny ethical issues."

Using AI: Cases and Concerns (West Virginia University library guide)

"ChatGPT can clearly help with generating text and assisting with various language-related tasks. While ChatGPT and text generators like it have many potential benefits, there are also certain limitations and challenges associated with their use." The guide reviews five particular risk that "should raise concerns among all academics:"

This is how AI bias really happens—and why it’s so hard to fix, Karen Hao (MIT Technology Review, February 4, 2019)

"We often shorthand our explanation of AI bias by blaming it on biased training data. The reality is more nuanced: bias can creep in long before the data is collected as well as at many other stages of the deep-learning process. For the purposes of this discussion, we’ll focus on three key stages."

Scholars are supposed to say when they use AI; do they?, Stephanie Lee (The Chronicle, December 18, 2024)

"Many publishers now require authors to disclose when they use large-language models like ChatGPT to help write their papers. But a substantial number seemingly aren’t, according to Alex Glynn, a research literacy and communications instructor at the University of Louisville."