Feeling More and More like a Computer

He who studies computers might take care lest he thereby think like a computer. And remember that your fate is gazing back at you.

Posted by Gnefil Voltexy on 2023-01-15
Estimated Reading Time 9 Minutes
Words 1.5k In Total

No technical project analysis and no knowledge-sharing today. Let’s just chill and chat for a bit.

Recently I have had the inexplicable feeling that my thinking pattern is becoming more and more like a computer. To be precise, the way I think about things is becoming more and more like the way a computer handles problems. What do I mean? Please read on.

I don’t know when I started to realise that I tend to use a backward-deduction approach (abductive reasoning) to analyse problems. What does that mean? Start with a problem that has already been resolved, try to trace back to the initial factors, and then work out, step by step, which stage was decisive for the success of the solution.

For example, a few days ago I wanted to book a plane ticket, but my luggage might have been over the weight limit (when buying the ticket, I was asked how many suitcases I planned to bring), so I gave up booking that day and planned to look for a suitcase the next day. The next day I was lucky enough to find and buy a small, fancy suitcase, but when I went back to buy the ticket that night, the price had doubled…

I couldn’t help thinking, “What was the wrong decision? What could I have done better? Is there a better solution for the next time I encounter the same situation?”

Have you noticed? At this point my head had already begun to process the information automatically, like a function being called on its own.

Let’s pick up from the previous paragraph and start working backwards. Since the ticket cost twice as much today, I should have bought it yesterday. So which bit of my decision-making yesterday determined this blunder? Got it: it was the moment I saw the page that said “Choose your suitcases and quantity” and I was not confident enough about the luggage. On the one hand, I wondered whether I had underestimated the weight; on the other hand, I was afraid I wouldn’t be able to buy a suitcase the next day and would have paid for an extra suitcase slot for nothing.

Now that I think about it, what would have been a possible way to break out of this situation in the first place? At this point I naturally skipped the brute-force search for a feasible solution (that said, no normal human would brute-force their way to a solution, right?). However, I also skipped all the uncertain factors, the ones that might or might not have come up at the time, because I wanted to be certain about the solution. So I took these two questions as the starting point and “radiated” outwards along their lines of relationship to find any feasible solution. At least the number of relation types is limited: similar, opposed, complementary, subclass of, superclass of, raised abstraction, lowered implementation, and so on. So it is possible to walk through all of these relations. Of course, in reality I have the advantage of knowing the future outcome (I am working backwards), so I can often quickly find the most obvious line of relationships towards an effective solution to the problem.
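If I were to caricature this in code, it would look like a small backward search over a graph of relations. The sketch below is purely hypothetical: the relation graph, the node names, and the `backward_search` helper are all made up for illustration. It only mirrors the idea of starting from the known outcome and following a limited set of relation edges until a feasible fix turns up.

```python
from collections import deque

# Hypothetical relation graph for the suitcase story; every edge carries one of
# the limited relation types mentioned above (caused-by, opposed-by, ...).
RELATIONS = {
    "ticket price doubled": [("caused by", "did not buy the ticket yesterday")],
    "did not buy the ticket yesterday": [
        ("caused by", "unsure about the luggage weight"),
        ("caused by", "unsure a suitcase could be bought in time"),
    ],
    "unsure about the luggage weight": [("complemented by", "weigh the luggage in advance")],
    "unsure a suitcase could be bought in time": [("opposed by", "book a refundable ticket")],
}

def backward_search(outcome, is_fix):
    """Walk backwards from a known outcome, following relation edges
    breadth-first, until a node that counts as a feasible fix is found."""
    queue = deque([(outcome, [outcome])])
    seen = {outcome}
    while queue:
        node, path = queue.popleft()
        if is_fix(node):
            return path
        for _, neighbour in RELATIONS.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, path + [neighbour]))
    return None

print(backward_search("ticket price doubled",
                      lambda node: node.startswith(("weigh", "book"))))
```

The cheat, of course, is that the graph is already laid out from hindsight; building it forwards, without knowing the outcome, is the hard part.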

The solution I retrieved for this suitcase problem concerns the type of ticket: I could have bought a fully refundable ticket. That way, if I didn’t end up with a suitcase, I could refund the ticket and buy it again (if it turned out cheaper). Furthermore, regarding the estimate, I could have weighed things beforehand. By estimating earlier the weight of what would come out of the luggage and the weight of what would go in, I could have calculated an approximate final weight, concluded that I needed another suitcase, and bought one sooner. Well, this is the end of my backward deduction, and of my journey.

As I told the story the way I thought through it, everything went very smoothly. But! Things are not that easy. If we abstract the perspective to a higher layer, it is easy to see the “bugs” in my way of thinking. First of all, it is reasonable for me to exclude brute-force retrieval, but I also exclude uncertain factors. This naturally precludes any out-of-the-box thinking or any sudden intuition on my part. The biggest problem with this trap is not just that I miss possible solutions. The biggest problem is that I will subconsciously reject any factor that does not directly relate to the problem in my future problem-solving. In other words, I completely ignore factors outside the problem’s conditions and remain completely confined to the problem.

Secondly, because I am working backwards, many of the outcomes I now know were not available at the time of the problem. This forms a paradox: although I can back-propagate from the future outcome along a line of relationships to find the easiest solution, how do I recognise that this is the right line among all the logical relationships? I had ruled out brute-force inspection, so how was I going to find the specific line of relationships that would lead me to victory? Even though the number of relation types is limited, the number of lines grows as the distance from the main condition increases. For example, in the case of the “suitcase problem”, I first needed to find the “ticket” clue in the main question, then “reduce the risk of buying the ticket”, and then I had to find the option on the website that matched this new condition, which was “refundable tickets”. So look, there are quite a lot of relationship lines to walk, not to mention the issue of efficiency and return: if you spend too long on this, you might as well just buy the extra suitcase slot and be done with it.
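To put a rough, purely hypothetical number on that growth (the branching factor of eight relation types and the depth of three steps are assumptions for illustration, not measurements):

```python
# Illustrative only: how many candidate lines a forward search would face,
# versus the single three-step line that hindsight hands me for free.
relation_types = 8   # similar, opposed, complementary, subclass of, ... (assumed)
depth = 3            # ticket -> reduce the booking risk -> refundable option

forward_candidates = relation_types ** depth   # 8**3 = 512 lines to consider
backward_steps = depth                         # 3 steps along one known line

print(forward_candidates, backward_steps)      # 512 3
```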

Finally, the last bug, one that is difficult to catch in any single event unless you see the whole picture: out of inertia, I fall into this rigid, backward line of thinking very frequently in my daily life. Without realising it, I no longer know how to make room for uncertainty in my methodology. As a friend of mine once said, it’s not about being creative when you know nothing; it’s about being surprisingly innovative despite having built your palace of knowledge.

Interestingly, I still relied on this solidified mode of thinking while doing the analysis above. So even though I set out to reflect on and analyse my way of thinking, the analysis itself still reads like a kind of computer analysis.

As my amount of knowledge continues to increase, my thinking is becoming more and more set in stone. There is nothing wrong with that statement in itself, because as more knowledge pours in, it tends to accumulate on each person’s model for understanding the world - their worldview. Whenever a new piece of knowledge is logical and does not conflict with the existing worldview, it is absorbed and the existing model thickens (like a building-block model). If the incoming knowledge is new and conflicting, we have to weigh its impact on the existing worldview. This happens in one of three ways. The first is that the worldview utterly excludes the new knowledge - in other words, it is completely irrational (for this person) - and the knowledge is kicked out of the model world. The second is that, upon reflection, the new knowledge is given more weight than the conflicting knowledge in the original worldview, in which case the old knowledge is the one that gets kicked out. The third is when neither piece of knowledge can be cleanly removed, and the result is that both are kept with their respective weights, blended or retained until a new framework can explain them both.
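Again, a caricature in code may help. Everything here is invented for illustration: the `update_worldview` function, the weights, and the toy beliefs are assumptions, not a real model of belief revision. It only mirrors the three cases above.

```python
# Toy three-way update of a "worldview" (a set of beliefs): reject the
# newcomer, evict the old belief, or keep both side by side.
def update_worldview(worldview, new_fact, weight, conflicts):
    clashing = {old for old in worldview if conflicts(old, new_fact)}
    if not clashing:
        return worldview | {new_fact}               # no conflict: model thickened
    if all(weight[new_fact] < weight[old] for old in clashing):
        return worldview                            # case 1: new knowledge rejected
    if all(weight[new_fact] > weight[old] for old in clashing):
        return (worldview - clashing) | {new_fact}  # case 2: old knowledge evicted
    return worldview | {new_fact}                   # case 3: both retained, in tension


beliefs = {"the sun goes around the earth"}
weight = {
    "the sun goes around the earth": 0.3,
    "the earth goes around the sun": 0.9,
}
conflicts = lambda a, b: "goes around" in a and "goes around" in b and a != b
print(update_worldview(beliefs, "the earth goes around the sun", weight, conflicts))
# case 2 applies here, so the old belief is evicted: {'the earth goes around the sun'}
```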

Building Blocks

Why do we suddenly start discussing worldviews? Because I want to introduce a concept that is closely related to, but not identical with, the worldview. If the worldview is the accumulation of acquired knowledge - the database of knowledge - then the way people acquire knowledge, the way they think about knowledge, is the methodology. To draw out the point of these two paragraphs: in the process of learning, the thickening of the worldview expands the boundaries of cognition, but as knowledge expands, the methodology also solidifies, leading to rigid thinking. An analogy can be drawn between this and a rigid worldview: a rigid worldview refuses to update itself in response to any external information, while rigid thinking refuses to update the methodology. While I was learning computer theory, what I studied and how I thought converged into the same thing. So in general, yes, I have a more extensive knowledge base than before, but the way I think is becoming more and more solidified, more and more like a computer.

Scaffold of Blocks