There my main character was, sobbing on the floor, having gone through hell tailored by a computer program, designed to make her confront the very thing she didn’t want to confront. She was done. She had nothing left to discover. She knew all that she needed to know about herself. Yet, why couldn’t she cope? Why did the tears still fall?
It was then I departed from the rails of my outlining process (which isn’t restrictive, but it is an outline) and had the character ask, plead even, with the program. Didn’t it feel anything for the pain she went through? Did it feel anything at all for her plight? Or was it really an uncaring bit of cleverness, designed to be brutally efficient and cruelly honest at all costs?
At this point, I didn’t know what came next, but I was going somewhere. I typed answer after answer, scene after scene. None of it worked; none of it was worthy of the question before us. One response was five pages of pure speculative goodness, an insight into future artificial intelligence programming born from an understanding of advanced heuristics and neural networks.
But it was missing something. I deleted that, too.
Finally, a week later, I sat down and reached for my poor main character. She deserved an answer.
“Lexus, there is no cure for the human condition,” the program told her.
That was not the answer she expected. She (and by extension, I) wondered if the program was still running. Still trying to make her see through the fog of humanity by stripping it away and replacing it with a program of its own.
Or was it using logic to reach her, to tell her that bad things happen to good people, that in its own broken way, it did care. It cared a great deal. It cared more than it could say.
I don’t have the answer to that, just as Lexus doesn’t have the answer to being human.
Maybe someday we’ll both find out.