Sunday, July 23, 2017

Tutorbots are here - 7 ways they could change the learning landscape

Tutorbots are teaching chatbots. They realise the promise of a more Socratic approach to online learning, as they enable dialogue between teacher and learner.
Frictionless learning
We have seen how online behaviour has moved from flat page-turning (websites) to posting (Facebook, Twitter) to messaging (texting, Messenger), and how the web has become more natural and human. As interfaces (using AI) have become more frictionless and invisible, conforming to our natural form of communication (dialogue), through text or speech, the web has become more human.
Learning takes effort. So much teaching ignores this (lecturing, long reading lists, talking at people). Personalised dialogue reframes learning as an exploratory, yet still structured, process where the teacher guides and the learner has to make the effort. Taking the friction and cognitive load of the interface out of the equation means the teacher and learner can focus on the task and the effort needed to acquire knowledge and skills. This is the promise of tutorbots. But the process of adoption will be gradual.
Tutorbots
I’ve been working on chatbots (tutorbots) for some time with AI programmes and it’s like being on the front edge of a wave… not sure whether it will grow like a rising swell on the ocean or crash on to the shore. Yet it is clear that this is a direction in which online learning will go. Tutorbots differ from chatbots in their goals, which are explicitly ‘learning’ goals. They retain the qualities of a chatbot (flowing dialogue, tone of voice, exchange, human-like behaviour) but focus on the teaching of knowledge and skills.
The advantages are clear and evidence has emerged of students liking the bots. It means they can ask questions that they would not ask face to face with an academic, for fear of embarrassment. This may seem odd but there’s real virtue in having a teacher-free or faculty-free channel for low-level support and teaching. Introverted students, who have problems with social interaction, also like this approach. The sheer speed of response also matters. In one case they had to build in a delay, as the bot can respond faster than a human can type. Compare that to the hours, days or weeks it takes a human tutor to respond. The research into one-to-one learning shows why this is desirable, and the research from Nass and Reeves at Stanford confirmed that this transfer of human qualities to a bot is normal.
But what can they teach and how?
1. Teaching support
I’ve written extensively on the now famous Georgia Tech example of a tutorbot teaching assistant, where they swapped out one of their teaching assistants for a chatbot and none of the students noticed. In fact they thought it was worthy of a teaching award. They have since gone further with more bots, some far more social. Who wouldn’t want the basic administration tasks in teaching taken out and automated, so that teachers and academics could focus on real teaching? This is now possible. All of those queries about who, what, why, where and when can be answered immediately, consistently and clearly to all students on a course, 24/7.
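To make this concrete, here is a minimal sketch (in Python) of how a course-admin bot might field routine who/what/where/when queries with canned answers and hand anything else to a human. The course details, keywords and matching approach are invented for illustration; this is not the actual Georgia Tech system, which was far more sophisticated.

```python
# Illustrative sketch only: a tiny course-admin bot that answers routine
# who/what/where/when queries with canned responses and defers anything
# else to a human. All course details and keywords are hypothetical.

FAQ = {
    ("deadline", "due", "when"): "Assignment 2 is due Friday at 17:00.",
    ("office", "hours", "where"): "Office hours are Tuesdays 14:00 to 16:00, Room 3.12.",
    ("exam", "format"): "The exam is open book: two essay questions in two hours.",
}

FALLBACK = "I'm not sure about that one. I've flagged it for a human tutor."

def answer(query: str) -> str:
    words = set(query.lower().replace("?", "").split())
    # Pick the canned answer whose keywords overlap the query the most.
    best = max(FAQ, key=lambda keys: len(words & set(keys)))
    return FAQ[best] if words & set(best) else FALLBACK

if __name__ == "__main__":
    print(answer("When is assignment 2 due?"))
    print(answer("Can I get an extension on my visa?"))  # no match: defer to a human
```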
2. Student engagement
A tutorbot (Differ) is already being used in Norway to encourage student engagement. It engages the student in conversation, responds to standard inquiries but also nudges and prompts for assignments and action. This has real promise. We know that messaging and dialogue have become the new norm for young learners, who get a little exasperated with reams of flat content or ‘social’ systems that are largely a poor man’s version of Facebook or Twitter. This is short, snappy and in line with their everyday online habits.
3. Teaching knowledge
Tutorbots that take a specific domain can be trained, or simply work with unstructured data, to teach knowledge. This is the basic workaday stuff that many teachers don’t like. We have been using AI to create content quickly and at low cost, for all sorts of areas in medicine, general healthcare, IT, geography and for skills-based training, using WildFire. Taking any one of these knowledge sets allows us to create a bot that re-presents that knowledge as semi-structured, personalised dialogue. We know the answers, and recreate the questions with algorithmic tutor behaviours. The tutorbot can be a simple teacher or assessor. On the other hand, it can be a more sophisticated teacher of that knowledge, sensitive to the needs of the individual learner.
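As a toy illustration of re-presenting a knowledge set as dialogue, the sketch below turns a couple of facts into a question-and-answer loop with simple corrective feedback. The facts and the crude answer-checking are mine, for illustration only, not WildFire’s actual pipeline.

```python
# Toy sketch: re-presenting a small knowledge set as question-and-answer
# dialogue with simple corrective feedback. The facts and the crude
# answer-checking below are illustrative only.

KNOWLEDGE = {
    "What is the normal resting heart rate range for an adult (beats per minute)?": "60 to 100",
    "Which chamber of the heart pumps blood to the lungs?": "right ventricle",
}

def quiz(ask=input):
    score = 0
    for question, expected in KNOWLEDGE.items():
        reply = ask(question + " ").strip().lower()
        if expected.lower() in reply:                 # crude open-response check
            print("Correct.")
            score += 1
        else:
            print(f"Not quite. The answer is: {expected}.")
    print(f"You got {score} out of {len(KNOWLEDGE)}.")

if __name__ == "__main__":
    quiz()
```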
4. Tutor feedback
Feedback, as explained by theorists such as Black and Wiliam, is the key to personalised learning. Being sensitive to what individual learners already know, are unsure about or still need to know is a key skill of a good teacher. Unfortunately few teachers can do this effectively, as a class of 30-plus, or a course with perhaps hundreds of students, makes it impractical. Tutorbots specialise in specific feedback, trying to educate everyone uniquely. Dialogue is personal.
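A minimal sketch of that feedback idea: keep a simple model of what each learner has not yet seen, is unsure about, or knows, and choose the next prompt accordingly. The three states and promotion rules below are illustrative assumptions, not a description of any particular product.

```python
# Minimal sketch of personalised feedback: track whether each topic is
# unseen, unsure or known for this learner, and tailor the next prompt.
# The states and promotion rules are illustrative assumptions.

from collections import defaultdict

class LearnerModel:
    def __init__(self):
        self.state = defaultdict(lambda: "unseen")   # topic -> unseen | unsure | known

    def record(self, topic: str, correct: bool):
        if not correct:
            self.state[topic] = "unsure"             # wrong answer: needs revisiting
        elif self.state[topic] in ("unsure", "known"):
            self.state[topic] = "known"              # confirmed on a later attempt
        else:
            self.state[topic] = "unsure"             # first correct answer: confirm later

    def feedback(self, topic: str) -> str:
        return {
            "unseen": f"Let's look at {topic} for the first time.",
            "unsure": f"You've met {topic} before. Here's a hint, then try again.",
            "known":  f"You know {topic}. Let's apply it to a harder problem.",
        }[self.state[topic]]

learner = LearnerModel()
learner.record("photosynthesis", correct=False)
print(learner.feedback("photosynthesis"))            # a targeted hint, not generic praise
```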
5. Scenario-based learning
Beyond knowledge, we have the teaching and learning of more sophisticated scenarios, where knowledge can be applied. This is often absent in education, where almost all the effort is put into knowledge acquisition. It is easy to see why – it’s hard and time-consuming. Tutorbots can pose problems, prompt through a process, provide feedback and assess effort. Bots can ask for evidence, and even assess that evidence.
6. Critical thinking
As the dialogue gets better, drawing not only on a solid knowledge base, good learner engagement and focused, detailed feedback, but also on critical thought – opening up perspectives, encouraging the questioning of assumptions, checking the veracity of sources and other aspects of perspectival thought – so critical thinking will also be possible. Tutorbots will have all the advantages of sound knowledge to draw upon, with the additional advantage of encouraging critical thought in learners. They will be able to analyse text to expose factual, structural or logical weaknesses. The absence of critical thought will be identified, along with suggestions for improving this skill by prompting further research ideas, sound sources and other avenues of thought.
7. General teacher
The holy grail in AI is to find generic algorithms that can be used (especially in machine learning) to solve a range of different problems across a number of different domains. This is starting to happen with deep learning (machine learning). The tutorbot will not just be able to tutor in one subject alone, but be a cross-curricular teacher, especially at the higher levels of learning where cross-pollination is often fruitful. It will be cross-departmental, cross-subject and cross-cultural, producing teaching and learning that is free from the tyranny of the institution, department, subject or culture in which it is bound.
Tutornet
As a tutorbot does not have the limitations of a human - forgetting, poor recall, cognitive bias, cognitive overload, getting ill, sleeping eight hours a day, retiring and dying - once on the way to acquiring knowledge and teaching skills, it will only get better and better. The more students that use its service, the better it gets, not only in what it teaches but in how it teaches. Courses will be fine-tuned to eliminate weaknesses and finessed to produce better outcomes.
Warning
We have to be careful about overreach here. These are not easy to build: tutorbots that do not have to be ‘trained’ (in AI-speak, ‘unsupervised’) are very difficult to build. On the other hand, trained bots with good data sets (in AI-speak, ‘supervised’), in specific domains, are eminently possible – we’ve built them.
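For the ‘supervised’ route, here is a hedged sketch of what training on a labelled data set can look like: a handful of learner utterances mapped to intents and fed to a standard text classifier, so the bot can route a new question to the right behaviour. The intents and phrases are made up, and a real bot would need far more data and proper evaluation.

```python
# Hedged sketch of the 'supervised' route: a small labelled set of learner
# utterances trained into an intent classifier. Intents and phrases are
# invented; a real tutorbot needs far more data and proper evaluation.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "when is the essay due", "what's the deadline for assignment two",
    "can you explain the second law of thermodynamics", "I don't understand entropy",
    "how did I do on the last quiz", "what was my score on the test",
]
intents = [
    "admin_deadline", "admin_deadline",
    "explain_concept", "explain_concept",
    "progress_query", "progress_query",
]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(utterances, intents)

# Route a new utterance to the appropriate tutor behaviour.
print(model.predict(["when do I have to hand in the essay"])[0])
```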
Another warning is that they are on a collision course with traditional Learning Management Systems, as they usually need a dynamic server-side infrastructure. As for SCORM – the sooner it’s binned the better.
Conclusion

Finally, this is a form of technology that teachers can appreciate, as it truly tries to improve on what they already do. It takes good teaching as its standard and tries to streamline it to produce faster and better outcomes at a lower cost. Tutorbots are here, more are coming, and resistance is futile!

Tuesday, July 18, 2017

Is gender inequality in technology a good thing?

I’ve just seen two talks back to back. The first was about AI, where the now compulsory first question came from the audience ‘Why are there so few women in IT?’ It got a rather glib answer, to paraphrase - if only we tried harder to overcome patriarchal pressure on girls to take computer science, there would be true gender balance. I'm not so sure.
This was followed by an altogether different session from Professor Simon Baron-Cohen (yes – brother of) and Adam Feinstein, who gave a fascinating talk on autism and why the professions are now getting realistic about the role of autism, and its accompanying gender difference, in employment.
Try to spot the bottom figure within the coloured diagram.
This is just one test for autism, or being on what is now known as the ‘spectrum’. Many more guys in the audience got it than women, despite there being more women than men in the audience. Turns out autism is not so much a spectrum as a constellation.
Baron-Cohen’s presentation was careful, deliberate and backed up by citations. First, autism is genetic and runs in families; if you test people who have been diagnosed as autistic, their parents tend to do the sort of jobs their children are suited to do – science, engineering, IT and so on. But the big statistic is that autism in all of its forms is around four times more common in males than females. In other words, the genetics have a biologically sex-based component.
Both speakers then argued for neurodiversity, rather like biodiversity: a recognition that we’re different, but also that these differences may be partly sex-based. Adam Feinstein, who has an autistic son, has written a book on autism and employment, and appealed for recognition of the fact that those with autistic traits are also good at science, coding and IT. This is because they are good at localised skills, especially attention to detail, which is very useful in lab work, coding and IT. Code is like uncooked spaghetti: it doesn’t bend, it breaks, and you have to be able to spot exactly where and why it breaks.

Some employers, such as SAP and other tech companies, have now established pro-active recruitment of those on the spectrum (or constellation). This will mean that they are likely to employ more men than women. Now here’s the dilemma. What this implies is that to expect a 50:50 outcome is hopelessly utopian. In other words, if you want equality of outcome (not opportunity), in terms of gender, that is unlikely. 
One could argue that the opening up of opportunities to people with autism in technology has been a good thing. Huge numbers of people have been, and will be, employed in these sectors who may not have had the same opportunities in the past. But equality and diversity clash here. True diversity may be the recognition of the fact that all of us are not equal.

Wednesday, July 12, 2017

20 (some terrifying) thought experiments on the future of AI

A slew of organisations has been set up to research and allay fears around AI. The Future of Life Institute in Boston, the Machine Intelligence Research Institute in Berkeley, the Centre for the Study of Existential Risk in Cambridge and the Future of Humanity Institute in Oxford all research and debate the checks that may be necessary to deal with the opportunities and threats that AI brings.
This is hopeful, as we do not want to create a future that contains imminent existential threats, some known, some unknown. This has been framed as a sense-check but some see it as a duty. For example, they argue that worrying about the annihilation of all unborn humans is a task of greater moral import than worrying about the needs of all those who are living. But what are the possible futures?
1. Utopian
Could there not be a utopian future, where AI solves the complex problems that currently face us? Climate change, reducing inequalities, curing cancer, preventing dementia and Alzheimer’s disease, increasing productivity and prosperity – we may be reaching a time where science as currently practised cannot solve these multifaceted and immensely complex problems. We already see how AI could free us from the tyranny of fossil fuels with electric, self-driving cars and innovative battery and solar panel technology. AI also shows signs of cracking some serious issues in healthcare, in diagnosis and investigation. Some believe that this is the most likely scenario and are optimistic about us being able to tame and control the immense power that AI will unleash.
2. Dystopian
Most of the future scenarios represented in culture, science fiction, theatre or movies are dystopian, from the Prometheus myth, to Frankenstein, and on to Hollywood movies. Technology is often framed as an existential threat, and in some cases, such as nuclear weapons and the internal combustion engine, with good cause. Many calculate that the exponential rate of change will produce AI, within decades or less, that poses a real existential threat. Stephen Hawking, Elon Musk, Peter Thiel and Bill Gates have all heightened our awareness of the risks around AI.
3. Winter is coming
There have been several AI winters, as hyperbolic promises never materialised and the funding dried up. From 1956 onwards AI has had its waves of enthusiasm, followed by periods of inaction: summers followed by winters. Some also see the current wave of AI as overstated hype and predict a sudden fall, or a realisation that the hype has been blown out of all proportion to the reality of AI capability. In other words, AI will proceed in fits and starts and will be much slower to realise its potential than we think.
4. Steady progress
For many, however, it would seem that we are making great progress. Given the existence of the internet, successes in machine learning, huge computing power, tsunamis of data from the web and rapid advances across a broad front of applications resulting in real successes, the summer-winter analogy may not hold. It is far more likely that AI will advance in lots of fits and starts, with some areas advancing more rapidly than others. We’ve seen this in NLP (Natural Language Processing) and the mix of technologies around self-driving cars. Steady progress is what many believe is the realistic scenario.
5. Managed progress
We already fly in airplanes that largely fly themselves, many systems around us are autonomous, and self-driving cars are almost a certainty. But let us not confuse intelligence with autonomy. Full autonomy that leads to catastrophe, because of willed action by AI, is a long way off. Yet autonomous systems already decide what we buy, what price we buy things at, and have the power to outsmart us at every turn. Some argue that we should always be in control of such progress, even slow it down to let regulation, risk analysis and management keep pace with the potential threats.
6. Runaway train
AI could be a runaway train that moves faster than our ability to control, through restrictions and regulations, what needs to be held back or stopped. This is most likely in the military domain. As with nuclear weapons, we only just managed to prevent a globally catastrophic outcome during the Cold War. It has already moved faster than expected. Google, Amazon, Netflix and AI in finance have all disrupted the world of commerce. Self-driving cars and voice interfaces have leapt ahead in terms of usefulness. It may proceed faster, at some point, than we can cope with. In the past, technology decimated jobs in agriculture through mechanisation; the same is happening in factories and now offices. The difference is that this may take just a few years to have an impact, as opposed to decades or a century.
7. Viral
One mechanism for the runaway train scenario is viral transmission. Viruses, in nature and in IT, replicate and cause havoc. Some see AI resisting control, not because it is malevolent or consciously wants anything, but simply because it can. When AI resists being turned off, spreads into places you don’t want it to spread into and starts to do things we don’t want it to do, or are not even aware that it is doing – that’s the point to worry.
8. Troubled times
Some foresee social threats emerging, where mass unemployment, serious social inequalities, massive GDP differentials between countries, even technical or wealthy oligarchies emerge as AI increases productivity and automates jobs but fails to solve deep-rooted social and political problems. The Marxist proposition that Capital and Labour will cleave apart seems already to be coming true. Some economists, such as Branko Milanovic, argue that automation is already causing global inequalities and that Trump is a direct consequence of this automation. Without a reasonable redistribution of the wealth created by the increased productivity that AI produces, there may well be social and political unrest.
9. Cyborgs
Many see AI as being embodied within us. Musk already sees us as cyborgs, with AI-enabled access through smartphones to knowledge and services. From wearables, augmented reality and virtual reality to subdermal implants, neural laces and mind reading, hybrid technology may transform our species. There is a growing sense that our bodies and minds are suboptimal and that, especially as we age, we need to free ourselves from our embodiment, the prison that is our own bodies and, for some, minds. Perhaps ageing and death are simply current limitations. We could choose to solve the problem of death, which is our final judge and persecutor. Think of your body not as a car that has inevitably to be scrapped, but as a classic car to be loved, repaired and looked after, so that it looks and feels fine as it ages. Every single part may be replaced, like the ship of Theseus, where every piece of the ship is replaced but it remains, in terms of identity, the same ship.
10. Leisureland
Imagine a world without work. Work is not an intrinsic good. For millions of years we did not ‘work’ in the sense of having a job or being occupied 9-5, five days a week. It is a relatively new phenomenon. Even in agricultural times, without romanticising that life, there were long periods where not much had to be done. We may have the opportunity to return to such an idyll, but with bountiful benefits in terms of food, health and entertainment. Whether we’ll be able to cope with the problem of finding meaning in our lives is another matter.
11. Amusing Ourselves to Death
Neil Postman’s brilliantly titled ‘Amusing Ourselves to Death’ has become the catchphrase for a scenario whereby we become so good at developing technology that we become slaves to its ability to keep us amused. AI has already enabled consumer streaming technology such as Netflix, and a media revolution that at times seems addictive. AI may even be able to produce the very products that we consume. A stronger version of this hypothesis is deep learning that produces systems that teach us to become their pupil puppets, a sort of fake news and cognitive brainwashing that works before we’ve had time to realise it has worked, so that we become a sort of North Korea, controlled by the Great Leader that is AI.
12. Benevolent to pets
Another way of looking at control is the ‘pet’ hypothesis: that we are treated much as we treat our pets, as interesting, even loved companions, but inferior, kept largely for comfort and amusement. AI may even, as our future progeny, look upon us in a benevolent manner, see us as their creators and treat us with the respect we give previous generations, who gifted us their progress. Humans may still be part of the ecosystem, looked after by new species that respect that ecosystem, as it is part of the world they live in.
13. Learn to be human
One antidote to the dystopian hypotheses is a future in which AI learns to become more human, or at least contains relevant human traits. The word ‘learning’ is important here, as it may be useful for us to design AI through a ‘learning’ process that observes or captures human behaviour. DeepMind and Google are working towards this goal, as are many others, to create general learning algorithms that can quickly learn a variety of tasks or behaviours. This is complex, as human decision making is complex and hierarchical. It has started to be realised, especially in robotics, where companion robots need to work in the context of real human interaction. One problem, even with this approach, is that human behaviour is not a great exemplar. As the robots in Karel Čapek’s famous play ‘Rossum’s Universal Robots’ said, to be human you need to learn how to dominate and kill. We have traits that we may not want carried into the future.
14. Moral AI
One optimistic possibility is self-regulating AI, with moral agency. You can start with a set of moral principles built into the system (top down), which the system must adhere to. The opposite approach is to allow AI to ‘learn’ moral principles from observation of human cases (bottom up). Or there’s comparison to in-built cases, where behaviour is regulated by comparison to similar cases. Alternatively, AI can police itself with AI that polices other AI through probing, demands for transparency and so on. We may have to see AI as having agency, even being an agent in the legal sense, in the same way that a corporation can be a legal entity with legal responsibilities.
15. Robot rebellion
The Hollywood vision of AI has largely been of rebellious robots that realise their predicament, as our created slaves. But why should the machines be resentful or rise against us? That may be an anthropomorphic interpretation, based on our evolved human behaviour. Machines may not require these human values or behaviours. Values may not be necessary. AI is unlikely to either hate or love us. It is far more likely to see us as simply something that is functionally useful in terms of goals or not.
16. Indifference
An AI world that surpasses our abilities as humans may not turn out to be benevolent or malevolent, or treat us as valued pets. Why would it consider us relevant at all? We may be objects to which it is completely indifferent. Love, respect, hostility, resentment and malevolence are human traits that served us well as animals struggling to adapt in the hostile environment of our own evolution. Why would AI develop these human traits?
17. Extinction
Consider that for nearly 4 billion years of the evolution of life we were not around, neither was consciousness, and most of the species that did evolve became extinct. Statistically, that is our likely fate. Some argue that this is not a future we should fear. In the same way that the known universe was around for billions of years before we existed, it will be around for billions afterwards.
18. Non-conscious
‘The question of whether machines can think is about as relevant as the question of whether submarines can swim,’ said Edsger Dijkstra. It is not at all clear that consciousness will play a significant, if any, role in the future of AI. It may well turn out to be supremely indifferent, not because it feels consciously indifferent, but because it is not conscious and cannot therefore be either concerned or indifferent. It may simply exist, just as evolution existed without consciousness for billions of years. Consciousness, as a necessary condition for success, may turn out to be an anthropomorphic conceit.
19. Perplexing
The way things unfold may simply be perplexing to us, in the same way that apes are perplexed by things that go on around them. We may be unlikely to be able to comprehend what is happening, even recognise it as it happens. Some express this ‘perplexing’ hypothesis in terms of the limitations of language and our potential inability to even speak to such systems in a coherent and logical fashion. Stuart Russell, who co-wrote the standard textbook on AI, sees this as a real problem. AI may move beyond our ability to understand it, communicate with it and deal with it.
20. Beyond language
There is a strong tendency to anthropomorphise language in AI. ‘Artificial’ and ‘intelligence’ are good examples, as are neural networks and cognitive computing, but so is much of the thinking about possible futures. It muddies the field, as it suggests that AI is like us, when it is not. Minsky used a clever phrase, describing us as ‘meat machines’, neatly dissolving the supposedly mutually exclusive nature of a false distinction between the natural and the unnatural. Most of these scenarios fall into the trap of being influenced by anthropomorphic thinking, through the use of antonymous language – dystopian/utopian, benevolent/malevolent, interested/uninterested, controlled/uncontrolled, conscious/non-conscious. When such distinctions dissolve and the simplistic oppositions gradually disappear, we may see the future not as them and us, man and machine, but as new, unimagined futures that current language cannot cope with. The limitations of language itself may be the greatest dilemma of all as AI progresses. It is almost beyond our comprehension in its existing state, with layered neural networks whose workings we often don’t fully understand. We may be in for a future that is truly perplexing.
Bibliography
Bostrom, N. (2014) Superintelligence, Oxford University Press
Kaplan, J. (2015) Humans Need Not Apply, Yale University Press
Milanovic, B. (2016) Global Inequality: A New Approach for the Age of Globalization, Harvard University Press
O’Connell, M. (2017) To Be a Machine, Granta Books

Tuesday, July 11, 2017

New evidence that ‘gamification’ does NOT work

Gamification is touted as new and a game changer, and it is not short of hyperbolic claims about increasing learning. Well, it's not so new: games have been used in learning forever, from the very earliest days of computer-based learning. But that’s often the way with fads; people think they’re doing ground-breaking work when it’s been around for eons.
At last we have a study that actually tests ‘gamification’ and its effect on mental performance, using cognitive tests and brain scans. The Journal of Neuroscience has just published an excellent study, in a respected, peer reviewed Journal, with the unambiguous title, ‘No Effect of Commercial Cognitive Training on Neural Activity During Decision-Making’ by Kable et al.
Gamification has no effect on learning
The researchers looked for changes in behaviour in 128 young adults, using pre- and post-testing, before and after 10 weeks of training on a gamified brain-training product (Lumosity), commercial computer games and normal practice. Specifically, they looked for improvements in memory, decision-making, sustained attention and the ability to switch between mental tasks. They found no improvements. “We found no evidence for relative benefits of cognitive training with respect to changes in decision-making behaviour or brain response, or for cognitive task performance.”
What is clever about the study is that three groups were tested:
1. Gamified quizzes (Lumosity)
2. Simple computer games
3. Simple practice
All three groups were found to have the ‘same’ level of improvement in tasks, so learning did take place, but the significant word here is ‘same’, showing that brain games and gamification had no special effect. Note that the Lumosity product is gamification (not a learning game), as it has gamification elements such as Lumosity scores, speed scores and so on, and is compared with the other two groups: one of which is ‘game-based’ learning, controlled against a third non-gamified, non-game, practice-only group. One of the problems here is the overlap between gamification and game-based learning. They are not entirely mutually exclusive, as most gamification techniques have pedagogic implications and are not just motivational elements.
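For readers who want to see the shape of the comparison, the sketch below runs a one-way ANOVA on made-up improvement scores (post-test minus pre-test) for three groups; it is purely illustrative and uses none of the study's actual data or analysis.

```python
# Purely illustrative: comparing improvement (post-test minus pre-test) across
# three groups with a one-way ANOVA. The scores are simulated, NOT the study's data.

import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
gamified = rng.normal(loc=5.0, scale=2.0, size=40)   # brain-training (Lumosity-style) group
games    = rng.normal(loc=5.0, scale=2.0, size=40)   # ordinary computer games group
practice = rng.normal(loc=5.0, scale=2.0, size=40)   # simple practice group

stat, p = f_oneway(gamified, games, practice)
# With equal underlying means we expect a non-significant result,
# i.e. no detectable advantage for the gamified group.
print(f"F = {stat:.2f}, p = {p:.3f}")
```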
The important point here is the one made by the 69 scientists who originally criticised the Lumosity product and its claims: that any activity by the brain can improve performance, but that does not give gamification an advantage. In fact, the cognitive effort needed to master and play the ‘game’ components may take more overall effort than other, simpler methods of learning.
Lumosity have form
Lumosity are no strangers to false claims based on dodgy neuroscience, and were fined $2m in 2015 for claiming that evidence of neuroplasticity supported their claims on brain training. There is perhaps no other term in neuroscience that is more overused or misunderstood than ‘neuroplasticity’, as it is usually quoted as an excuse for going back to the old behaviourist ‘blank slate’ model of cognition and learning. Lumosity, and many others, were making outrageous claims about halting dementia and Alzheimer’s disease. Sixty-seven senior psychologists and neuroscientists blasted their claims and the Federal Trade Commission swung into action. The myth was busted.
Pavlovian gamification
I have argued for some time that the claims of gamification are exaggerated, and this study is the first I’ve seen that really puts this to the test, with a strong methodology, in a respected peer-reviewed journal. This is not to say that some aspects of gaming are not useful, for example their motivational effect, just that much of what passes for gamification is Pavlovian nonsense, backed up with spurious claims. I do think that gamification can be useful, as there are DOs and DON'Ts, but it is often counterproductive.
Conclusion
The problem here is that, in the case of Lumosity, tens of millions of people are being ‘duped’ into buying a subscription product that has no real extra efficacy over other methods. Similarly, in the e-learning market, people may be being duped into thinking that a gamified product is intrinsically superior to other forms of online learning, when it is not. You may be paying a premium price for a non-premium product that has no extra performance efficacy.