I was stuck on a piece of writing, specifically the conclusion of an essay, so I decided to use two LLMs, but not in the way you might be thinking. Douglas Rushkoff recently spoke on one of his podcasts about using AI to ask where his stories might go, not in the hope of using the answer, but to see if the plans he had were too generic or familiar. So seeing the versions of the essay's ending that the AIs gave me conversely gave me a good indication of where not to go.
One thing I noticed was that the AIs kept using similar strategies for tying things up. For instance, they're really fond of grabbing opposing positions and then finding some kind of resolution, but rather than being a bold synthesis in the Hegelian sense, it tends to be a kind of deflation, a way of back-pedaling from the point of conflict to find some generic common ground. This actually reminded me of what is so powerful about dialectical reasoning: there's a sense of powering forward into new territory from the violence of opposition rather than falling back to the familiar as a means of resolution. In a way, the synthesis offers no resolution; it only offers forward momentum until the next schism arises.