
Endeavour AI Guide

USING GENERATIVE AI


What is Generative Artificial Intelligence?

Generative artificial intelligence is a category of artificial intelligence (AI) that is trained on pre-existing datasets to generate new content. It can produce text, images, code, video, and audio, and review or remix other existing material. ChatGPT is an example of a generative AI tool that uses large datasets from different sources to produce text in response to prompts. Although the content generative AI tools produce can be complex, these tools cannot judge the quality or credibility of the information they use and are not a reliable means of conducting research. AI tools can also be trained on older datasets, meaning the information they have access to may be outdated.



What is Academic Integrity?

Academic integrity refers to the values of honesty, trust, fairness, respect, and responsibility, which underpin our work at the College. When a student uses generative artificial intelligence (AI) tools, maintaining academic integrity is essential. Outsourcing assessments to AI when it is not permitted is a breach of academic integrity.

The College embraces developments in generative AI and recognises it as having the potential to facilitate better learning environments and support student learning. However, the College also recognises the risks associated with such technology for teaching, learning, assessment, and academic integrity.


Using generative AI ethically

You are expected to use generative AI technologies ethically, critically, and effectively, and to develop essential AI literacy skills. This includes recognising the opportunities generative AI affords online, at home, and in the workplace as well as its risks and limitations.


By developing your critical thinking skills and AI literacy, you can learn to use AI tools effectively and appropriately.

College guidelines about the ethical use of generative AI include but are not limited to:

  • Use generative AI models in ethical and responsible ways that are consistent with the College’s Academic Integrity Policy – HE. All work a student produces in the College – spoken and written, assessed and non-assessed – must demonstrate honesty, trust, fairness, respect, and responsibility.
  • Check subject requirements and assessment instructions, and consult the Lecturer about the appropriate use of generative AI. It may not be appropriate to use generative AI in all subjects or circumstances.
  • Generative AI can be a useful resource to support learning. For example, you can use it to explain concepts you are struggling to understand, create practice tests to prepare for an assessment, brainstorm a topic, proofread written work, or support your learning in other ways. Generative AI should be used to create additional learning opportunities and should not replace key academic tasks such as analysing and interpreting information, producing insights, and drawing conclusions.
  • In assessments/subjects where the use of generative AI is permitted, make sure any use of generative AI is acknowledged as ‘software’ in accordance with the APA 7 Referencing Guide.
    • Using output from AI models without appropriate acknowledgement may be regarded as academic misconduct. If unsure, seek advice from the subject’s Lecturer.
  • Critically review and evaluate any output from generative AI against reliable sources of information. Generative AI can produce authoritative sounding content that is inaccurate, biased, or incomplete.
    • You are accountable for any errors or omissions in material sourced from generative AI. Do not rely on generative AI as a primary source of information.
  • Abide by the rules set in assessment tasks where the use of generative AI is not permitted. In such assessments do not seek assistance from generative AI. Ensure that the entirety of the work submitted is an original creation.
    • Detection of the use of generative AI in an assessment where it is not allowed can constitute academic misconduct.
  • Familiarise yourself with any relevant expectations of or constraints on the use of generative AI related to your future professional accreditation, and be aware that these may be updated.
For more information on academic integrity, refer to the Academic Integrity Policy.
For more information on the use of generative AI, refer to the AI Student Guideline.

Below are some examples and tips on how students can use generative AI tools with integrity to support their studies.

Using generative AI to study and research

Appropriate Use

Jemima is working through the online learning modules on the subject LMS. She comes across some terms that she is unfamiliar with. The learning module explains these terms, but Jemima is confused. She refers to the additional resources the Lecturer has made available to find explanations and examples but still cannot get a good grasp of these terms. She asks generative AI to explain these concepts and provide examples. Jemima then posts a message on the Loop to her classmates and Lecturer to check if she has understood these terms correctly.

Jemima’s use of generative AI is appropriate as she is using it to support her learning. She utilises existing learning resources before she seeks assistance from generative AI. She also consults her Lecturer and classmates to confirm her understanding.

Inappropriate Use

Alice is completing a worksheet the Lecturer has assigned to the class. One of the questions asks for a list of resources on plants that can be used to lower blood pressure. Alice uses generative AI to create the list of resources. She copies and pastes the list of resources into the worksheet. She does not mention to her classmates or the Lecturer that she used generative AI to complete the task.

In this example, Alice relies solely on generative AI to create a list of resources. This is both ineffective and unethical: it is ineffective because Alice has not verified the accuracy or relevance of these resources, and it is unethical because her behaviour demonstrates a lack of integrity. Alice has not taken responsibility for doing the work herself and has instead outsourced it to an AI tool, nor has she attributed the work to the generative AI tool.

Using generative AI for editing and revision

Appropriate Use

Frey is writing a discussion forum post for a first year naturopathy subject. The Lecturer has asked that the posts be written in a formal style and include in-text citations; the Lecturer has provided sample posts that model the style of writing students need to adopt. Frey compares their post with the sample posts and makes revisions. They then submit the post to Studiosity and use the feedback provided to make further revisions. Finally, they use an AI tool to check for language errors. The AI tool identifies a range of words and sentences as incorrect and suggests corrections. Frey analyses each word and sentence carefully, noting that not all are incorrect. They accept some word-level corrections. Frey logs back into Studiosity and uses the Connect Live feature to ask grammar-specific questions.

Generative AI can be used for editing and revising text; however, it is important that students use these tools to learn how to revise and edit. Frey seeks feedback from Studiosity multiple times, where they can receive feedback from experts in the field of academic writing. Frey uses AI only to make minor language edits to their work and remains in control of their work.

Inappropriate Use

Charlie is writing a reflective essay and notices that the AI tool embedded in Microsoft Word has highlighted language errors. Charlie asks the AI tool to correct the errors; they replace the original essay with the AI-corrected version and submit it for marking.

Asking generative AI to correct errors and then submitting a piece of writing without verifying these corrections is not an appropriate use of AI. Charlie could have asked Studiosity for feedback or requested a meeting with the Lecturer for guidance and feedback (LMS > Help > Consultations). These strategies would have been both more effective and more ethical than relying on generative AI.

Using generative AI in assessments

Appropriate Use

Jose is working on an assignment where the use of generative AI is permitted. He analyses the assignment question and makes an outline of the sections and ideas he will include in the assignment. He books an academic consultation (LMS > Help > Consultations) and discusses with a content expert the outline he has made. He uses a generative AI tool to brainstorm further ideas. He identifies a few ideas from the AI generated output that are relevant and includes these in the outline. He rejects other ideas AI has offered. He then writes the assignment and submits it to Studiosity to receive feedback. Before Jose submits the assignment for marking, he adds an appendix to the assignment. In the appendix, he acknowledges the use of AI in creating the assessment and briefly explains how he used AI. He includes snapshots of prompts he used and the output generated.

Jose uses generative AI to brainstorm ideas for an assessment task. He does this only after he has analysed and planned the assignment himself. He critically reviews the output the AI has produced. This is important, as the information provided by generative AI may not be relevant or accurate. Jose acknowledges that he has used AI and provides details on how he has used it. This demonstrates academic honesty. Jose has also made use of College academic support services as well as Studiosity. Receiving support and feedback from experts is important in developing the skills and knowledge Jose needs to succeed in future assessments, subjects, and his future profession.

Inappropriate Use

Cooper is experiencing writer’s block! He needs to write a case study report and is not sure how to begin the writing or what to include. The assessment instructions don’t specify whether AI can be used to complete the assessment or not. Therefore, Cooper assumes that it is acceptable to ask AI to create the case study report for him. He writes the assessment prompt in a generative AI tool and instantly receives a report that meets the assessment word count. Cooper has a quick read of it, makes some changes and submits it for marking.

Cooper’s behaviour is a breach of academic integrity – he has outsourced an assessment to a third party. This constitutes contract cheating. It is not acceptable in any circumstance for a student to have a third party (e.g., AI software, a website, a friend) produce an assessment. Making edits and revisions to an assessment that has been outsourced does not change the fact that the student has engaged in academic misconduct. Cooper could have used Studiosity's assignment calculator to manage his time when planning and preparing for assessments. He could also have asked the Lecturer or booked a consultation (LMS > Help > Consultations) to receive support in completing the assessment.



For more examples, visit the more comprehensive guide developed by the University of Western Australia.

Further Useful Links