1.0 INTRODUCTION: ChatGPT and other Generative AI tools can aid teaching and learning, but they must be used responsibly. Students need guidance on ethical use and express instructions on when to use, and when not to use, Generative AI tools. Inappropriate use of AI tools may constitute the offence of academic dishonesty.
1.1 Lecturers should use CTE resources to familiarize themselves with how Generative AI works and how students use it.
1.2 Inappropriate use of AI tools should be treated in the same category as other forms of academic dishonesty.
1.3 Lecturers should encourage positive use of AI technologies but should determine, during course planning, the extent to which such use will be allowed.
1.4 Lecturers should expressly indicate in the Course Outline, and communicate at the beginning of each unit, learning activity, assignment, or assessment, the extent to which the use of AI tools is permitted.
1.5 If the use of AI is permitted, lecturers should require appropriate transparency by mandating that students acknowledge such use, for example by referencing the tool, by submitting the AI-generated output as an annex, or by another appropriate means.
2.0 POSITIVE USE OF AI: Regardless of specific course regulations, lecturers should allow general positive use of AI tools as learning assistants where such use aids learning and does not compromise academic integrity. Examples of such student use include:
2.1 Co-creative buddy: partly generating general plans or outlines for certain broad assignments or projects.
2.2 Study buddy: generating study guides, learning plans, and questions for student self-testing.
2.3 Feedback buddy: obtaining feedback on their thinking or understanding.
3.0 CAUTION: Even where use of AI is authorized, lecturers should ensure responsible use in the following cases, and in other cases not expressly stated here.
3.1 Using AI for general editing and formatting may be acceptable, but it can increase the likelihood of AI detection by Turnitin or other detection tools. By default, such detection should be treated as evidence of unauthorized use unless and until the student proves otherwise.
3.2 Lecturers should require students to save a copy of their AI prompts and the corresponding output in case reference or proof is later needed.
3.3 Lecturers should advise students to critically verify facts and sources before trusting or submitting work produced with AI tools.
3.4 Lecturers should require students to take author-level responsibility for any AI-generated concepts, facts or claims.
3.5 Lecturers should protect personal information and respect data privacy whenever they use AI tools, and should never give students instructions that would violate such protections.