There’s a big problem with most debates over technology. Whenever discussions about genetic engineering, new energy sources, nanotechnology, applied neuroscience, etc. come up, it’s assumed that EITHER you belong to a group that embraces any kind of technology, especially if it's shiny, OR you belong to a group (often, very unhelpfully labelled ‘Luddites’) who’d like to reject all technology beyond the Stone Age and live in a cold, dripping cave knapping flints.
(Please don't misunderstand the last point: I'm well aware of the sophistication of Stone Age/traditional cultures, and think that we have things to learn from them. It's just that in this sort of debate it's assumed that there are two and only two sides, and they're polarized).
I’ve never really identified with either of these extremes, but have instead tried to evolve a realistic view about the kinds of risks and promises that new technology brings. One thing that’s become very clear to me is that progress cannot mean technological progress only.
Take the Luddites. First, the fact that many Luddites were hanged (including a boy of twelve) often gets papered over in these discussions. Second, the Luddites did not reject technology out of hand; they were concerned about its alienating and disempowering effects.
I’ve read a lot of stuff recently about how the coming age of robotics and AI will free us from work. Well, I think it depends how it’s done. If AI is introduced into factories and other workplaces simply to make lots of workers redundant, then it is plainly not going to benefit anyone but the employers.
If, on the other hand, AI is introduced at the same time as social and political reform (say a guaranteed national income, and/or with a mind to using AI applications to enable small business and single workers) then it might liberate us from punishing working hours, for at least some jobs.
This example suggests that expecting a new technology automatically to ‘save us’, to raise living standards, or to take away the pain of toil is very naïve. So my position is similar to the one Nicholas Agar outlined in his book The Sceptical Optimist (Oxford, 2015).
Agar surveys the debates, and concludes that ‘declarations that technological progress is good or bad may be effective as rallying calls,' but do not provide us with a way of making informed choices. Instead, the benefits and dangers of new technology should be intelligently balanced against each other.
I think he's right, but it seems to me that the main problem here is that the stance that individuals and various factions take on technology is more often dictated by shared values than by a balancing of danger vs. opportunity.
So a Transhumanist will tend to embrace any kind of human-enhancing technology, simply because it is high technology, whereas a Deep Ecologist will tend to reject things like GM crops, nanotech and nuclear power.
I think that in practice it's pretty much impossible to make judgments that are divorced from your values. So maybe if you really want to make an informed judgment about technology, you need to figure out what your values actually are and be honest about them.
My own values, these days, tend to revolve around whether a technology will genuinely enhance well-being and the health of the planet, as opposed to assuming that any innovation, for its own sake, counts as 'progress' and will magically solve all our problems.