The Ethical Challenges Facing the Development of Powerful Technologies
This essay will discuss the challenges involved in developing technologies such as Artificial Intelligence and autonomous vehicles and the consequences that such developments may bring. In doing so, it will raise the potential complications arising from these advances and, in particular, examine the ethical decisions and challenges that will have to be addressed.
One ethical challenge that engineers face is deciding when technologies that could potentially cause death should be released. As illustrated by the recent tragedy in Arizona, the first death caused by a driverless vehicle, the questions posed by these technologies are becoming very real. Elaine Herzberg, 49, was struck while crossing the road by a Volvo SUV equipped with Uber's driverless technology and travelling at around 40 miles per hour. The incident shows a clear failure of engineering but, perhaps less obviously, also a failure of regulation. Uber, the company that developed the driverless system, had been encouraged to test its vehicles on the state's roads; indeed, just two weeks before the incident, the Arizona governor Doug Ducey had issued an executive order allowing fully driverless vehicles on Arizona's roads. Furthermore, it was revealed that the vehicles were far from ready for public roads: Uber was struggling to meet its target of 13 miles per human intervention, while its main competitor, the Google spinoff Waymo, was averaging 5,600 miles before needing an intervention. This is an example of a technology that should not have been released into the public domain. It may seem obvious why Uber and the Arizona government were in the wrong here, but it can be far more difficult to judge at which point a piece of technology should be released. No technology, especially one as complicated as an autonomous vehicle, can run perfectly 100% of the time, so there will inevitably be cases where errors occur and lead to deaths. At which point, then, should regulators allow a particular technology to be used in public, knowing that the potential for death can never be eliminated? A solution proposed by many in the case of autonomous vehicles is to ask whether the car is a better driver than an actual human. If it is, then even if some deaths occur, one can rest assured that far more deaths would have occurred had humans been driving. The same reasoning can be applied to technologies such as medical robots.
As technologies such as Artificial Intelligence and medical robotics develop, another ethical issue that arises is the social impact of allowing them to develop. Technologies such as Artificial Intelligence and autonomous vehicles all have the potential to seriously disrupt the livelihoods of many people. Autonomous vehicles, once released, will seriously affect the logistics and ride-hailing industries, and AI has the potential to make most jobs redundant. However, these technologies will also bring many benefits, such as increased efficiency and safety. For example, autonomous vehicles could prevent over a million deaths a year in traffic accidents caused by human error. Furthermore, with the help of these technologies, global output should only increase thanks to the additional efficiency. This should, in turn, make way for new and as yet unimagined jobs to emerge. This is, however, a rather optimistic view of the matter. It is quite possible that new jobs will not emerge and that the divide of wealth inequality will merely widen, with those able to take advantage of these technologies capturing most of the gains while causing mass unemployment. Even so, it is still possible for AI and similar technologies to have a positive social impact, provided governments take care in managing how wealth is distributed. Indeed, this may lead to a complete restructuring of our economic and social systems, a change larger even than the Industrial Revolution.
Another issue that must be resolved is how technology itself should make ethical decisions. This is becoming increasingly important as technology grows more powerful and more able to affect the real world. One example of such a decision is whether an autonomous vehicle should choose to save the driver's life or the pedestrian's in a situation where the death of one is unavoidable. The difficulty with ethics is that it is fundamentally subjective: each person has their own definition of what they consider ethically correct. The problem, then, lies in deciding who should encode the ethics of a technology. One option is simply to leave it to the engineers who designed it. However, a particular engineer may hold ethical views that conflict with those of the majority of the world; though the engineer's ethics may not be objectively wrong, they are hardly representative. The engineer may also be more interested in maximising profits than in following good ethics. Another option is to leave it to governments or even the general public. However, lawmakers and the average member of the public are unlikely to be well educated on the subject and so should not be trusted to make the most informed decisions either. The final option, then, is to delegate this role to external organisations that specialise in making such decisions. This seems to be the best option offered so far.
To conclude, the ethical challenges facing the development of powerful technologies are varied and incredibly complex. To address them, both engineers and governments will have to consider seriously the implications of technologies before developing or releasing them. It is sometimes argued that technological advances should proceed without any consideration of their ramifications; after all, technological advancement is in itself ethically neutral, and it is the way it is used that creates ethical implications. However, it is difficult to escape the thought that if engineers and scientists in the past had considered the consequences of their research, we might not have weapons of mass destruction today.