Highest Rated Comments


KarenRei (108 karma)

Of Consumer Reports' 17 categories, the Model 3 received a perfect score (5/5) in all but three:

https://pbs.twimg.com/media/D0CfPFAWwAYiMBx.png:large

* Paint/Trim received 4/5

* In-car Electronics received 4/5

* Body hardware received 3/5

Nonetheless, you rated the car 2/5 and marked it not recommended.

People have been scratching their heads trying to explain these extremely counterintuitive results. The Tesla Podcast speculated that perhaps you're "carrying forward" reliability results from earlier Tesla models, in effect overriding the actual data. Another speculation was that you received additional, later data that was worse than the data presented above. This leads to two obvious questions:

 * Why do you feel it is appropriate to "carry forward" results from vehicles that share virtually no hardware with the Model 3 and are built on a completely different platform, and to use them to override actual data?

 * Why do you think it's acceptable to have "secret data" and/or counterintuitive, unexplained results?

KarenRei (79 karma)

Your decision to de-recommend the Model 3 - released just before TSLA options expiry - caused the stock to plunge ten points and caused significant losses for investors. Not everyone was so unlucky, however. One person - seemingly inexplicably at the time - bought 8,600 $295 PUT contracts at $1.90 only two days prior to expiration. The "inexplicable" trade suddenly became explicable when, shortly thereafter, your report came out. If they sold near the end of the day, they earned over $2 million on that trade.
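As a back-of-the-envelope check of the figures above (the contract count, strike, and entry premium are from the text; the exit premium is a hypothetical assumption, since only "over $2 million" is stated):

```python
# Sketch of the put trade described above. Exit premium is an assumed,
# illustrative value consistent with the claimed "over $2 million" profit.
CONTRACTS = 8_600
SHARES_PER_CONTRACT = 100      # standard US equity option multiplier
ENTRY_PREMIUM = 1.90           # $ per share, as stated in the text
EXIT_PREMIUM = 4.25            # $ per share -- hypothetical assumption

cost = CONTRACTS * SHARES_PER_CONTRACT * ENTRY_PREMIUM
proceeds = CONTRACTS * SHARES_PER_CONTRACT * EXIT_PREMIUM
profit = proceeds - cost

print(f"cost: ${cost:,.0f}")      # cost: $1,634,000
print(f"profit: ${profit:,.0f}")  # profit: $2,021,000
```

With an entry cost of about $1.63 million, any exit premium above roughly $4.23 per share clears the $2 million profit mark, so the claim is at least arithmetically plausible.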

We know that you informed numerous news sources that you had upcoming news. It appears, on the face of it, that your news "leaked" from one of them, and allowed a short seller to profit off of everyone else's misery. Do you feel any guilt over this? Do you accept that your ratings changes are material nonpublic information? Do you have any plans to tighten your communications policies?

KarenRei (63 karma)

If I may sum up what Consumer Reports has said:

 * Model 3 did well in our testing.

 * Buyers love it more than owners of any other car love theirs, and overwhelmingly recommend it.

 * The car is safe. [One may add that it's not just safe: it received the lowest VSS (combined probability of injury) score of any car in the history of the NHTSA.]

Therefore...

 * Don't buy it, because you might have an issue with the paint or trim, or whatnot. Instead, go out and buy a different car that you won't like as much and that won't be as safe.

Can you understand why this is inexplicable and baffling to so many people?

KarenRei (63 karma)

In various interviews, you've justified your decision to override the views of your Model 3-owning readers - the same ones who ranked it the best car out there - and de-recommend it, based on the logic that, "Well, these buyers are unusual, they're unusually passionate, but some 'average buyer' won't like it as much."

On what grounds do you base the notion that the people reading your recommendation advice are a completely different group from the ones who already subscribe to Consumer Reports and have already purchased the vehicle (and love it)? Why do you think future buyers will be any different from past ones? Has this ever happened with any other Tesla? Because Teslas keep getting ranked by owners at the top of your lists. What is your rationale for assuming some sudden, radical shift in buyers, and for using it to justify such a radical, complete reversal of actual owner opinions about the vehicle?

KarenRei (57 karma)

I read your study on driver-assist systems with interest. But unfortunately, you left out one key system: my living room sofa.

How does my living room sofa rank as a driver-assist system, using your methodology?

Capability and performance: Obviously, the sofa scores the worst possible marks, as it can't do anything. -2

Ease of use: No buttons at all. It never does anything. How much easier can you get than that? +2

Clear when safe to use: Obviously the sofa will never take over for you! +2

Keeping driver engaged: Since my sofa will never take over the car for you, obviously you're fully engaged 100% of the time. +2

Unresponsive driver: Since the sofa never takes over from you, it will never let an unresponsive driver use it, even for a split second. +2
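The tally above can be checked in a few lines of Python (category names are paraphrased from the list):

```python
# The sofa's scorecard, one entry per category from the list above.
scores = {
    "capability_and_performance": -2,
    "ease_of_use": +2,
    "clear_when_safe_to_use": +2,
    "keeping_driver_engaged": +2,
    "unresponsive_driver": +2,
}

total = sum(scores.values())
print(total)  # 6

# Only one of the five categories actually measures capability.
capability_fraction = 1 / len(scores)
print(capability_fraction)  # 0.2
```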

My sofa gets a score of +6, handily taking the first-place slot as the best driver-assist system. How can you defend a methodology that would rank my living room sofa as the best driver-assist system? Is it not rather silly that only one-fifth of the points have to do with "does the system actually work?" The others are easy to score well on when the software doesn't actually do things. Is not the only meaningful measurement simply the combination of:

  1. How does having it available as an option to the driver affect the driver's average safety?
  2. How highly does the driver appreciate having it?

The former - while best based on actual statistics - can be estimated by multiplying how safe the system is relative to an unassisted human driver (better? or worse, because the driver stops paying attention?) by how much it actually gets used; the latter, in turn, is a combination of what the system is actually capable of and how annoying it is to use.
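One way to read that estimate is as a usage-weighted average of risk, which a minimal sketch (with made-up illustrative numbers - none of these values come from any real study) makes concrete:

```python
# "relative_safety": crash risk with the system engaged, relative to an
# unassisted driver (1.0 = no change, below 1.0 = safer, above = worse).
# "usage_fraction": share of driving time the system is actually engaged.
def net_safety_effect(relative_safety: float, usage_fraction: float) -> float:
    """Estimated overall crash-risk multiplier for a driver who has the
    system available: weighted average of assisted and unassisted risk."""
    return usage_fraction * relative_safety + (1.0 - usage_fraction) * 1.0

# A hypothetical capable system, engaged 40% of the time:
print(round(net_safety_effect(relative_safety=0.8, usage_fraction=0.4), 2))  # 0.92

# The sofa: never engaged, so it changes nothing:
print(net_safety_effect(relative_safety=1.0, usage_fraction=0.0))  # 1.0
```

Note how usage matters as much as capability in this framing: a system that is rarely engaged, or never engaged, barely moves the overall number - which is exactly why the sofa should score as a non-entity rather than a winner.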

How can you logically defend your methodology versus that?