Artificial intelligence (AI) is becoming a critical part of our society. This places a lot of responsibility on the shoulders of AI engineers. I believe that it is both desirable and possible to turn AI engineering, and software engineering, into a true profession.
To start, I believe that software engineering in general should be raised up to be a profession. Unfortunately, we are still in the “wild west” phase of the software field with no end in sight. I had hoped that the Y2K crisis, over 20 years ago now, would motivate a move towards professionalism, but it never happened. Even a common and expensive problem like that didn’t motivate governments to crack the whip, and as a community few people got on the professionalism bandwagon promoted by groups such as IEEE, ACM, BCS, CIPS, and others. So I don’t see much hope for this happening in my lifetime.
Perhaps we have a shot with AI engineering. There is currently a focus by governments, and non-government agencies, around the world on developing some form of AI ethics and AI regulation. A lot of this motivation is born of fear and misunderstanding. That’s not what I would consider ideal, but I’ll take it. Furthermore, if we get “lucky” and there is a catastrophe resulting from an inappropriate application of AI-based technology, then we’ll have an opportunity to leverage the resulting outrage to motivate a movement towards real professionalism. The Tacoma Narrows Bridge collapse (if you’ve never seen it, the video is incredible) was a catastrophe that motivated greater professionalism within the civil engineering community.
My hope is that AI Engineer is raised up to be a true profession, providing a meaningful first step towards wider professionalism for software engineers. Let’s examine the criteria for professionalism and why meeting them would be a good thing for AI Engineers:
- Recognized professional body(ies). There is currently no professional association recognized as a body suitable to certify and govern AI Engineers. Such a body would define, evolve over time, manage, and govern the other criteria that follow.
- Requisite knowledge. There is certainly a significant amount of knowledge required of AI Engineers. But there isn’t an agreed-upon body of knowledge, or even a definition of what it would contain, against which education and supporting credentials could be developed. A professional body would define the requisite knowledge for AI Engineers, potentially providing guidance to both AI education providers and students.
- Long period of study. Given how fast the technology and marketplace evolve, AI Engineers will require continuing professional development (CPD) to keep up. A professional body would govern this at a minimum, and perhaps even offer such training, or at least govern the organizations offering it. It’s easy to say that people need to do the work to keep up with their changing profession; it’s another thing to say that they’ll lose their credentials if they don’t.
- Credentials. Education organizations, such as the University of Leeds, currently offer courses and credentials in AI. However, due to the lack of a professional body, those offerings are not validated by a recognized authority. It would certainly help AI practitioners to be able to easily identify which education providers offer programs that count towards recognized credentials. Interestingly, some organizations such as the Vector Institute for AI are performing this sort of validation. Perhaps they will evolve into an AI professional body one day?
- Self-regulation. Currently, self-regulation within the AI industry is effectively limited to the personal and employer levels. Without a recognized professional body, we will not get to the country or international levels. It would behoove AI Engineers to have the respect of employers.
- Serving the public interest. There is a lot of good and interesting talk about this, but in general most AI endeavours seem to be motivated by a profit mindset rather than a societal-good mindset; at best, societal good seems to be an unintended side effect in most cases. Given that we know of society-level externalities, we should want to get ahead of them and ideally avoid the negative ones.
- Code of conduct. Once again, this is currently left to individuals and employers. The last thing that AI Engineers want is to be tainted by the actions of one or two bad actors.
You may find some of my other blog postings about artificial intelligence to be of interest. Enjoy!