Highest Rated Comments
Aginyan · 6 karma
I work in the tech sector doing data analysis (on the order of ~500M-1B rows, so nothing near the scale you guys do), and one thing I've learned is that looking at (nearly) raw data is often very useful for understanding what's going on with the system (whether it's bugs or just plain unexpected behavior).
Does this habit still apply at CERN scale? Or have things become so massive that you've gotta plan ahead and rely more and more on robust reducers/data quality checkers until it's at a size comprehensible to the human brain, and catch stuff later when things don't make sense?
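As a rough illustration of the raw-data spot check habit described above, here is a minimal sketch in Python/pandas; the file name raw_events.csv and the specific checks are assumptions for illustration, not anything tied to either workflow mentioned here.

```python
import pandas as pd

# "raw_events.csv" is a hypothetical stand-in for a raw data dump.
raw = pd.read_csv("raw_events.csv")

# Eyeball a handful of rows as-is: odd values often jump out here
# long before they show up in aggregated reports.
print(raw.sample(n=20, random_state=0))

# Quick sanity checks that tend to surface bugs or surprising behavior:
print(raw.dtypes)                    # columns parsed as the wrong type?
print(raw.isna().mean())             # unexpected missing-value rates?
print(raw.describe(include="all"))   # out-of-range or constant columns?
```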
Aginyan · 7 karma
Most of my work life revolves around Excel, largely for its power to "make a not-that-bad default graph super fast" and the "hands-on" feel I get from manipulating a few hundred thousand rows of data dumps to explore around. More advanced tools exist, but they don't have the same feel of picking up a cube of data and manipulating it to see what's going on. So thank you guys for making my life possible.
Every time someone asks me why I still use a PC at work (instead of a Mac), I say "because I use Excel" and get many understanding nods in the data analyst/scientist community. I certainly had to re-learn a ton of muscle memory just so I can do the simplest things on a Mac for meetings (e.g. command+t instead of F4 =( ). What guided things to become that way?
Also, I've found occasional weird bugs in Office 2010: moving average formulas that paste incorrectly when going 'in reverse', date fields in charts that insist on being January even when the data clearly has another date. Where should I report these?