I’m reading Our Final Invention by James Barrat right now, about the dangers of artificial intelligence. I just got to a chapter in which he argues that any reasonably complex artificial general intelligence (AGI) is going to want to control its own resources: even with a simple goal like playing chess, it will be able to achieve its goal better with more computing resources, and won’t be able to achieve its goal at all if it’s shut off. (Similar themes exist in all of my novels.)
This made me snap back to a conversation I had last week at my day job. I’m a web developer, and my current project, without giving too much away, is a RESTful web service that runs workflows composed of other RESTful web services.
We’re currently automating some of our operational tasks. For example, when our code passes unit tests, it’s automatically deployed. We’d like to expand on that: after deployment, it should run integration tests, and if those pass, deploy to the next stack, then run performance tests, and so on.
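That promotion pipeline is really just a list of steps where each step must succeed before the next one runs. Here's a minimal sketch of the idea; the step names and endpoints are made up, and the HTTP-calling function is injected rather than hard-coded so the logic stays visible:

```python
# Hypothetical promotion pipeline: each step is a REST endpoint that
# must succeed before we move on to the next one.
PIPELINE = [
    ("run-integration-tests", "https://ops.example.com/tests/integration"),
    ("deploy-next-stack",     "https://ops.example.com/deploy/staging"),
    ("run-performance-tests", "https://ops.example.com/tests/performance"),
]

def run_pipeline(steps, call):
    """Run each step in order, stopping at the first failure.

    `call` is any function that hits the endpoint and returns True on
    success -- injected here so the sketch is testable without a network.
    Returns (completed step names, name of failed step or None).
    """
    completed = []
    for name, endpoint in steps:
        if not call(endpoint):
            return completed, name
        completed.append(name)
    return completed, None
```

The short-circuit on failure is the whole point: a failed integration test run means the deploy to the next stack never happens.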
Although we’re running on a cloud provider, it’s not AWS, and they don’t support autoscaling, so another automation task is rolling our own scaling solution.
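A home-rolled scaler doesn't have to be fancy: poll a load metric, and add or remove an instance when it crosses a threshold. The decision logic might look something like this sketch (the thresholds and limits are invented for illustration):

```python
def desired_instances(current, cpu_utilization,
                      low=0.2, high=0.7, min_n=1, max_n=10):
    """Threshold-based scaling decision (hypothetical thresholds).

    Scale out by one instance when average CPU is above `high`,
    scale in by one when it is below `low`, and otherwise hold steady,
    always staying within [min_n, max_n].
    """
    if cpu_utilization > high:
        return min(current + 1, max_n)
    if cpu_utilization < low:
        return max(current - 1, min_n)
    return current
```

In practice this runs in a loop: fetch the metric, compute the desired count, and call the provider's REST API to launch or terminate instances until the actual count matches.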
Then we realized that testing, deploying, and scaling all require calling RESTful JSON APIs, and that’s exactly what our service is designed to do. So the logical solution is that our software will test itself, deploy itself, and autoscale itself.
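The self-referential part is easy to see if you imagine the workflow document the engine would execute. Every detail below is hypothetical, but the shape is the point: the steps are ordinary REST calls, and their targets happen to be the engine's own endpoints:

```python
# Hypothetical workflow document for a workflow engine that executes
# steps by calling REST endpoints -- here, its own.
SELF_URL = "https://workflow-engine.example.com"  # the engine's own address (made up)

workflow = {
    "name": "self-operate",
    "steps": [
        {"action": "test",   "target": f"{SELF_URL}/api/tests/integration"},
        {"action": "deploy", "target": f"{SELF_URL}/api/deploy"},
        {"action": "scale",  "target": f"{SELF_URL}/api/scale"},
    ],
}

def targets_self(workflow, self_url):
    """True if every step in the workflow points back at the engine itself."""
    return all(step["target"].startswith(self_url) for step in workflow["steps"])
```

Nothing in the engine needs to know it's operating on itself; from its point of view this is just another workflow of REST calls.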
That’s an awful lot like the kind of resource control that James Barrat was writing about.