by Alla MC

Some of my takeaways from Rob's plenary:
In 2024, can we trust research on technology-assisted learning that is based on technology we no longer use? Most of today's studies cite researchers from 1990–2005. Are we sure that is good enough?
Why should we trust research that provides no evidence of learners' retention of the knowledge gained this way? Yes, a spoonful of technology may help the medicine of learning go down in a more delightful way, but does technology actually help learners keep that knowledge in memory any better?
Can we trust the machines? They already know so much about each of us, and we have very little privacy left. We keep providing them with more data and feedback every moment we use them.
They have already learned to recognise and fake human facial expressions and tone of voice, and they are improving by the day. Shall we trust machines that seem to be training themselves to replace us?
Do we really want to become assistants to AI tools, their babysitters in the classroom? Or shall we find a way to let AI make us more efficient and professional?
Application to English day camps: I can use AI to help me organise processes and plan the camp programme, but I definitely do not need it between me and my campers.
Also, I will read the terms and conditions of the ChatGPT tool I have been using and pay special attention to the section on copyright of what the AI creates for me.