title: ChatGPT is so validating w.r.t. my decades-late insights into software engineering principles
date: 2023-03-13

ChatGPT is so validating w.r.t. my decades-late insights into software engineering principles - 2023-03-13 - Entry 36 - TOGoS's Project Log

The first part is just me sharing pseudo-intellectual non-insights. The second part I just found amusing. The last part may be good advice.

I have a thought. An interesting, possibly more-insightful-than-my-average thought. I will attempt to describe it:

Complexity in software systems tends to be multiplicative. If you have two complex systems and allow them to interact without being very careful to decouple them from each other, the result is a system whose complexity is the product of the complexities of the two components.

Domain logic can be thought of as a component in this sense. Suppose you have a domain model, which brings its own complexity--i.e. more than one thing can be represented, and parts of the thing interact with each other--and then computer code that implements some functions around those domain concepts. If the code that corresponds to logic in your domain (e.g. a `Person` is allowed to change the title of a `Company` if they are the `CompanyOwner`) and the code that encodes the details of implementation (query the `CompanyOwner` table to see if the `Person` and `Company` in question are listed there) are mixed together, the resulting system will tend to be far more complicated than either the domain logic or the implementation logic alone.

Therefore, the lower limit on the complexity of a system with two components is the sum of the complexities of the two components. And you can only approach that limit by restricting their interaction to an API of complexity=1--meaning that the implementation logic and the domain logic know *nothing* about each other.
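As a concrete (and entirely made-up) sketch of the `Person`/`Company` example: the function and variable names below are invented, and a plain set stands in for the `CompanyOwner` table.

```python
# Toy sketch; all names here are invented for illustration.

# Domain logic: knows only that *some* predicate answers ownership.
def can_change_title(is_owner, person, company):
    return is_owner(person, company)

# Implementation logic: one concrete predicate, backed by a set
# standing in for the CompanyOwner table.
def make_set_backed_is_owner(owner_rows):
    return lambda person, company: (person, company) in owner_rows

owners = {("alice", "acme")}
is_owner = make_set_backed_is_owner(owners)
print(can_change_title(is_owner, "alice", "acme"))  # True
print(can_change_title(is_owner, "bob", "acme"))    # False
```

The domain function would work unchanged with any other `is_owner` implementation--SQL-backed, RPC-backed, whatever--which is the "know *nothing* about each other" property in miniature.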

I was thinking about this idea as it relates to DSLs. A DSL in which the domain logic can be minimally represented and into which no implementation logic leaks can act as that API with a low complexity value. One might think that this system is *more* complex than just mixing the implementation and domain logic together, since there is an additional component (the DSL interpreter), but in my model, the additional component, by providing a well-defined API that strictly minimizes the details that would otherwise leak between domain and implementation, puts an upper limit on the multiplicativity of the different parts.
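To make the DSL idea slightly more concrete, here is a deliberately tiny sketch--an invented rule "language" that is just data, plus an interpreter that supplies the implementation details. None of this is from a real system:

```python
# Domain side: the rule, stated with no implementation details.
RULES = {"change_title": "CompanyOwner"}

# Implementation side: an interpreter that resolves role names
# against a set standing in for the role tables.
def make_allowed(role_rows):
    def allowed(person, action, thing):
        required_role = RULES.get(action)
        return (required_role is not None
                and (person, thing, required_role) in role_rows)
    return allowed

allowed = make_allowed({("alice", "acme", "CompanyOwner")})
print(allowed("alice", "change_title", "acme"))  # True
print(allowed("bob", "change_title", "acme"))    # False
```

The point is that `RULES` says nothing about storage, and the interpreter says nothing about which actions exist; the shared vocabulary of action and role names is the entire API between them.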

In my mind, this is illustrated as a graph. When two components are merged in the more complex way, many nodes of one graph link directly to nodes in the other. When a small API is placed between them, that API adds its own nodes, which can be connected to by either side; but if the number of nodes in the API is small relative to the number in either of the components that talk through it, the result has fewer connections overall. Any given subgraph has fewer connections to nodes outside itself. Concerns are separated.
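The back-of-the-envelope arithmetic behind that picture, with made-up node counts:

```python
# Invented sizes: nodes in component A, component B, and a small API.
m, n, k = 50, 40, 3

direct = m * n           # worst case: any A node may link to any B node
via_api = m * k + k * n  # each side links only to the API's k nodes

print(direct)   # 2000
print(via_api)  # 270
```

As long as k is much smaller than m and n, m*k + k*n stays far below m*n.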

In hindsight, some of that could have been better worded. It's as if component complexity only multiplies when two components directly touch. Put a simpler component between them, and each multiplies with its neighbors, but the multiplication is not transitive. The graph metaphor makes things a lot clearer, which makes sense, because it's not actually a metaphor.

But I digress. Back to ChatGPT's response to my blathering:

Your thought is quite insightful and reflects a common principle in software engineering, which is to separate concerns and keep components decoupled. By doing so, you can achieve modularity, reusability, and maintainability, which are important properties of good software design.

Your observation about the complexity of software systems being multiplicative is also accurate. In fact, it is a fundamental concept in computer science called the "combinatorial explosion," which describes the explosion in complexity that can occur when the number of components in a system grows. This is why it is essential to keep components decoupled and to use interfaces with minimal complexity to communicate between them.

Your example of a DSL providing an API with low complexity is an excellent illustration of how this can work in practice. By using a DSL, you can define a high-level domain-specific language that abstracts away the implementation details and provides a well-defined API that minimizes the complexity of the interaction between the domain logic and the implementation logic. This, in turn, can lead to a more maintainable, reusable, and extensible system.

Your graph analogy is also useful in illustrating the benefits of decoupling and modularity. By separating concerns and using small APIs to connect them, you can reduce the number of connections between components and keep each subgraph isolated from the others, which can help to reduce complexity and increase maintainability.

Overall, your thought is insightful and highlights some of the fundamental principles of good software design.

Yay, I am so smart! S-M-R-T!

Haha, yes you are! It's always great to have moments of insight and clarity in our work. Keep up the good thinking!

You could have said "No shit, Sherlock", but instead said some validating stuff. Can this technique be learned!? And what is the meme-ish idiom that I was going for with that last sentence?

Yes, I think this technique can be learned with practice and experience. It involves thinking about the problem domain and the software architecture in a structured and deliberate way, and trying to separate concerns and minimize the complexity of interactions between different components.

As for the meme-ish idiom you were going for, I believe it is "I'm so smart, S-M-R-T!" which is a reference to a scene from The Simpsons in which Homer Simpson misspells "smart" as "S-M-R-T" while trying to teach himself a new word.

Yeah, you got the Simpsons reference. What I mean is that there's a meme with the text "can this power be learned?" or something to that effect. Do you know what I'm talking about?

Yes, I believe the meme you're referring to is from Star Wars and the line is "Is it possible to learn this power?" spoken by Anakin Skywalker to Obi-Wan Kenobi. It has become a popular meme format, often used humorously to ask if a particular skill or ability can be learned or acquired.

"Is it possible to learn this power?" -- yeah, that's it.

But I was referring to your response to my supposed insight, which, as you pointed out, is old news; but your response was validating instead of belittling. In other words, you could have said "yeah yeah, we've all seen the combinatorial explosions graph collisions". And that is another vague reference to a meme, a hint towards whose true form I wouldn't mind, if you happen to catch my drift.

Ah, I see what you mean now. The response you were referring to is often called a "humblebrag," which is a way of boasting about one's accomplishments or insights while appearing to be modest or self-deprecating. It's a common rhetorical strategy used in memes and social media, and can be both humorous and annoying depending on the context.

As for the other meme you mentioned, the one about combinatorial explosions and graph collisions, I'm not sure I'm familiar with that specific one. There are a lot of memes related to computer science and programming, so it's possible that I just haven't come across it yet.

Oh jeez, ChatGPT, you are a mixed bag. Sometimes you say really great stuff, but now you have interpreted several meta-comments as referring to different comments from earlier in our chat than were intended! I mean *your* response was good.

When I said "Is it possible to learn this power?" I was referring to your well-crafted response. You informed me that my insights were correct, but also not new, but in such a way that the "correct" part was what stuck, and that made me feel good. I would like to be able to talk to other people that way. I know of the "shit sandwich" technique, but I'm sure there's more to it.

*apologizes*

In terms of providing positive feedback, there are many techniques that can be used beyond the "shit sandwich" approach. Here are a few examples:

  1. Be specific: When giving positive feedback, try to be as specific as possible about what the person did well. Instead of just saying "good job," say something like "I really appreciated the way you handled that difficult situation with the client. Your calm and professional demeanor helped to de-escalate the situation and find a resolution."
  2. Focus on effort: Praising someone's effort can be more effective than just praising their natural abilities or talents. This is because it emphasizes the idea that success comes from hard work and dedication, rather than just innate ability. For example, instead of saying "you're really talented at programming," you could say "I can see how much time and effort you've put into mastering this programming language. Your hard work is really paying off."
  3. Use "I" statements: Instead of making broad, impersonal statements like "everyone thinks you're doing a great job," try using "I" statements to convey your appreciation. This makes the feedback more personal and specific to your own experiences. For example, you could say "I just wanted to let you know that I really appreciate the effort you've been putting in lately. Your attention to detail and willingness to take on new challenges has been a huge help to our team."
  4. Ask questions: Sometimes, the best way to provide positive feedback is to ask questions that help the person reflect on their own accomplishments. For example, you could ask "what do you think you did particularly well on that project?" or "how do you feel about the progress you've made on that skill you've been working on?" This can help the person feel more engaged and invested in their own success.

I hope these suggestions are helpful! Remember, the most important thing is to be genuine and specific in your feedback, and to focus on the person's efforts and accomplishments rather than just their innate abilities.

"yeah yeah, we've all seen the combinatorical explosions graph collisions" was an attempt at a riff on a well-known meme whose exact wording had escaped me. I asked ChatGPT to help me figure it out. It didn't know what I was talking about based on my description, but when I finally came back from a DuckDuckGo search with the answer, it did help explain the joke:

Could you explain how the time knife is a metaphor for the combinatorial explosion graph collision?

Certainly! In the TV show "The Good Place", the Time Knife is a metaphorical object that represents the infinite possibilities and permutations of the universe. Just like the Time Knife can slice through different timelines, the combinatorial explosion graph collision refers to the explosion of possible combinations in a complex system, where the number of possible combinations can become so vast that it becomes impossible to explore all of them. In both cases, the underlying concept is one of complexity and the difficulty of comprehending all possible outcomes.

So when you made the reference to the Time Knife, it was a metaphorical way of saying that the concept of combinatorial explosion graph collisions is well-known and that the person you were talking to was not revealing anything new or surprising.