Humans before Users
We are humans before we are users. Every human on earth is wired differently, and so every human uses software differently. Two people may follow the same breadcrumb trail, weaving through the same panes of a given GUI, but in between, something else happens.
That something is intrinsic to being human. Differences in making decisions, interpreting information, assuming liability, handing over personal or intimate information - all of this and more has a significant effect that cannot be easily quantified as “user” behaviour.
Experienced sales team reps, customer success managers, UI/UX designers, software engineers, and creative technologists know this all too well.
When Creating Software, Humans Must Come First
Yet recently, this topic of discussion has gone astray. I am glad that the new Netflix documentary The Social Dilemma has slapped people on the wrist, but I fear Netflix will soon become its own worst enemy, crossing a line that Social Media platforms and Search Engine (SE) monopolies have already crossed.
The intersection between human and machine, the personal relationship we as humans share with software and technology, is impossible to ignore. Technology is an integral feature of our lives.
We use our devices to conduct most of our daily activities. We have a new-found power and self-confidence in using modern technology, and this has given rise to new standards of human rights and to vigorous regulatory measures.
Much of this is unsatisfactory, and the wider public may not be aware of it - especially where Artificial Intelligence (AI) is concerned.
Consider some thought experiments. Should we grant AI a legal personality? If AI can use its “initiative”, should we make it liable for its behaviour? Who owns the information generated by AI, or by the mechanisms of AI?
Is it the operator, the vendor, the supplier, the user? What happens if “information-hazards” generated by AI are not regulated by SE or Social Media monopolies? How should AI make the right choices? How should it differentiate between what is wrong and what is right? Does it do this already?
These are simple questions with difficult, overlapping, and intricate answers. But what is certain is that the answers lie in a continuous dialogue between those in power or authority and us, the users - humans.
Our society will undergo a significant, universal change from the integration of AI, a change both good and bad. How prepared are we for it?
The answer: hardly at all - or perhaps only the biggest and best-resourced are prepared. Understandably, the creators and operators of AI are the ones who can afford to accommodate such change and disruption.
Besides this, sensationalist AI narratives such as “the greater will of AI” and its “rapid overtaking of society” have been pushed on the public by mass media outlets and social media too thoughtlessly and too loudly. It has become exhausting for many.
The potential effects of AI are far-reaching. This is partly because AI touches every aspect of our physical, mental, social and moral lives, and most people find these changes too challenging to think about. Still, AI is here to stay. The motto “we are humans before we are users” therefore, I think, succinctly captures my beliefs about AI and its relationship with the humans of the future.
We have a great opportunity to define and establish new councils, institutions and regulators; clearer legislation; and more definitive “user” rights that accommodate the human.
Now is the time to think carefully about how we can live alongside AI and its societal influence, before it is too late. Doing this will require worldwide agreement and worldwide enforcement that is transparent, neutral, logical, cross-disciplinary and, most importantly, multi-generational.