Highest Rated Comments


Tsundokuu · 111 karma

Heideggerian A.I. is a way of approaching models of cognition that are not computational. Most A.I. research is explicitly Cartesian; in other words, it believes that cognition arises in the following manner: a system receives information from its environment by way of apparatus that are sensitive to specific stimuli (much like our eyes are sensitive to light but not to sound waves). This information is called 'brute facts', i.e., information that is directly sensed by way of some mediating organ or hardware.

The Cartesians believe these brute facts can, by way of the proper algorithms, be assembled into meaningful, conceptual information. From there it's pretty simple: if the machine has evaluated the meaning of the brute facts (these red particles = the surface of a balloon), then the machine can make judgments about how to act and behave. A further premise is that these judgments are not just circuits firing but, if sufficiently complex, 'emerge' as conscious experience of the world; that, however, is a separate premise.
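To make the Cartesian picture concrete, here is a deliberately crude sketch of that sense -> interpret -> act pipeline in Python. Everything in it (the function names, the thresholds, the balloon rule) is invented for illustration; it is a caricature of the view being described, not anyone's actual system.

    # Caricature of the Cartesian pipeline: raw stimuli in, symbolic
    # 'brute facts' out of the sensors, meaning assembled by rules,
    # then a judgment about how to act.  All names and rules are made up.

    def sense(environment):
        # Mediating hardware responds only to the stimuli it is built for,
        # e.g. a camera returns colour patches but hears no sound waves.
        return [("red_patch", 0.92), ("curved_edge", 0.88)]

    def interpret(brute_facts):
        # 'Proper algorithms' assemble brute facts into conceptual content.
        labels = {fact for fact, confidence in brute_facts if confidence > 0.5}
        if {"red_patch", "curved_edge"} <= labels:
            return "balloon_surface"
        return "unknown"

    def act(concept):
        # Judgments about behaviour follow from the evaluated meaning.
        return "don't poke it" if concept == "balloon_surface" else "look closer"

    print(act(interpret(sense(environment=None))))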

Heideggerians think this model is absurd because of problems like the 'frame problem', or the 'common-sense knowledge problem'. You see, what we call common sense is actually an extraordinary epistemological faculty that A.I. research is still baffled by. When we are faced with certain situations, we have a unique ability to make quick judgments about the situation without first evaluating an enormous number of contingencies. When a human enters a situation, they immediately notice the relevant facts that bear on how to proceed, and judge what is to follow. A Heideggerian would say that we notice what is significant about the situation without having to subconsciously process piles of data about the environment. How exactly humans are capable of doing this is very difficult to explain, but it's all there in Heidegger's book 'Being and Time' if you feel adventurous.

Let me give you an example. Suppose we enter a room, and I ask you: is this someone's living room, or are we in the dining hall of a restaurant? The Cartesian machine would have to begin processing information about the environment and then run that information through algorithms in order to make judgments (if x, then y). A Cartesian machine might first ask: are there any tables in this room? You might say yes, but a living room and a restaurant dining hall would both have tables in them, so we don't learn much from asking this.

You might then ask, well, how many tables? A private dining room will only have a few, while a restaurant might have dozens.

I'll grant this seems plausible, but is it indisputable? What if we are in the private dining room of a very rich person? Does it make sense to say, 'If a room has more than four tables, then we are certainly in a restaurant'? Why four? Why not five or six tables? What number would truly be significant enough to indisputably differentiate living rooms from restaurants? In some cultures, restaurants don't even have tables!

A clever programmer will then try to devise and test any number of measurements that a machine could use to inferentially or deductively evaluate whether a room is a private dining room or a restaurant. 'Are there any menus lying around?' A private dining room won't have menus, right?

Perhaps not, but then again: I might take a menu home with me and leave it on my dining room table. I might receive menus in the mail. I might be a graphic designer with boxes of newly designed menus sitting on my dining room floor, waiting to be shipped to a client!
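To see why this turns into a bottomless pit, here is roughly what the Cartesian programmer's rule set would look like in Python. The features, thresholds, and rules are all invented for illustration; the point is only that every counterexample (the rich person's house, the menu that came in the mail, the culture without tables) demands yet another hand-written exception.

    # A toy 'if x, then y' room classifier.  Every rule below already has
    # a known counterexample, and patching it just creates new ones.

    def classify_room(features):
        if features.get("table_count", 0) > 4:
            return "restaurant"        # ...unless the owner is very rich
        if features.get("has_menus"):
            return "restaurant"        # ...unless a menu came in the mail
        if features.get("table_count", 0) == 0:
            return "living room"       # ...unless the culture uses no tables
        return "living room"

    # A six-table private dining room and a living room with a stray menu
    # both get classified as restaurants.
    print(classify_room({"table_count": 6, "has_menus": False}))
    print(classify_room({"table_count": 1, "has_menus": True}))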

We could go on infinitely discussing 'algorithms' for deducing and making inferences about what sort of room we are in. No doubt an annoying philosopher will always provide hypothetical counterexamples to whatever the Cartesian programmer believes to be an infallible rule. But even then we are missing the point, which is that humans never have to process such questions, and never find themselves doing so. Whenever we are in a situation, we almost always take notice of what is most significant in our environment and ignore the mountains of other information an artificial intelligence would have to process before reaching a conclusion. We are always already attuned to the relevant frame of thought, and this is thanks to our skills, to what our cultures value, and to the normative forces of society, such as the correct way to stand in an elevator or how close is too close when speaking to a stranger.

A.I. research has been struggling with the frame problem since its inception. Heideggerians think it will never be solved until we abandon our Cartesian models of cognition. For a really in-depth paper on the limits of Cartesian A.I. and the strengths of Heideggerian solutions, read this paper by Hubert Dreyfus.

Edit: wow, thanks for the gold, kind stranger. :) This comment certainly did not deserve it; I wrote it after a 12-hour shift at work and felt too tired to really clean up my grammar or arguments. I do not think I did the Heideggerians any justice here, because lots of people seem confused as to what Heideggerian A.I. even is. If you are one of those people, please read the paper I linked at the end. I promise you it's a very scientific and academic paper, not a blog post, and it won't be a waste of your time.

Tsundokuu · 70 karma

> This is kind of tricky because how do you tell a computer what to pay attention to and what to ignore? This is not very easy, but folks in AI field are working on this.

I think you may be massively understating this. As you undoubtedly know yourself, this is called the 'frame problem', and A.I. research has been working on it for almost 50 years now without any progress. So it's misleading to say 'we are currently working on it', as if this were a new focus or a recent development in research.

Do you have any opinions on Heideggerian A.I.?

Tsundokuu · 4 karma

Awesome and incredibly informative response! Thank you so much :)

Tsundokuu · -9 karma

Jordan Peterson is human garbage.