MarkLovesTech
Highest Rated Comments
MarkLovesTech · 203 karma
The two biggest reasons would be either operational (the need to have multiple copies of the data that are globally distributed via replication and partitioned via sharding) or related to productivity and agility: developing with drivers that let you treat your database objects the same as the objects in your code is incredibly powerful and allows much faster development.
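To illustrate that "database objects look like the objects in your code" point, here is a toy in-memory sketch. It is not the real PyMongo API (no server, no BSON), though real MongoDB drivers do expose similarly named methods such as `insert_one` and `find_one`; the class and example data below are purely hypothetical.

```python
# Toy in-memory stand-in for a document-database collection (hypothetical;
# not a real driver). It shows how a document maps directly onto the
# host language's native dict/list types, with no ORM mapping layer.

class ToyCollection:
    """Minimal sketch of a document collection held in memory."""

    def __init__(self):
        self._docs = []

    def insert_one(self, doc):
        # The application object IS the stored document.
        self._docs.append(dict(doc))

    def find_one(self, query):
        # Return the first document whose fields match every key in the query.
        for doc in self._docs:
            if all(doc.get(k) == v for k, v in query.items()):
                return doc
        return None

users = ToyCollection()
users.insert_one({"name": "Ada", "roles": ["admin"], "logins": 3})
match = users.find_one({"name": "Ada"})
print(match["roles"])  # the nested list comes back as a plain Python list
```

Because the stored document and the in-code object share the same shape, there is no translation step between rows/columns and application objects, which is the productivity win described above.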
There is so much more, but I want to keep answering other folks' questions - let's continue the discussion on r/mongodb and u/MarkLovesTech.
MarkLovesTech · 153 karma
Imposter Syndrome is real! Yes, in each role that I’ve taken on, I’ve become insecure about whether I was actually the person they thought they hired, on one hand. On the other hand, I’ve often wondered if I’m up to the challenge. Over the years I’ve realized it’s completely natural and tried to turn it into motivation rather than fear. I consistently keep track of the top 3-5 ways I should improve (in both my family/personal and work life). I'd love to talk more about this!
MarkLovesTech · 138 karma
It's important to start by saying that DocumentDB is not based on MongoDB. It is based on Aurora PostgreSQL, a database with a very different underlying architecture (and one I was the GM of back when I was at Amazon).
The reason DocumentDB can add replicas quickly is that it isn't replicating the data physically to different locations - Aurora PostgreSQL uses the Aurora storage system. While this feels great, the reality is that you’re now putting all your data at risk on a single shared storage system. With MongoDB, the storage is separate - and you can spread it across data centers, availability zones, regions, and even cloud providers - and we manage it all for you.
When you add a new MongoDB replica node, it's a separate physical host with its own copy of the data, which means it can be separated from the cluster and still have the full database available locally.
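To make the "full copy per node" contrast concrete, here is a hedged toy sketch (hypothetical class and method names, not MongoDB internals): every write lands on the primary and is copied to each secondary, so any node detached from the set still holds the complete database on its own.

```python
# Toy model of replica-set replication (hypothetical; not MongoDB code).
# Each node keeps a physically separate, full copy of the data, unlike a
# shared-storage design where all replicas point at one storage system.

import copy

class ReplicaNode:
    """A node that holds its own complete copy of the data."""
    def __init__(self):
        self.data = {}

class ReplicaSet:
    def __init__(self, num_secondaries=2):
        self.primary = ReplicaNode()
        self.secondaries = [ReplicaNode() for _ in range(num_secondaries)]

    def write(self, key, value):
        # Writes go to the primary, then replicate to every secondary,
        # so each host ends up with its own full physical copy.
        self.primary.data[key] = value
        for node in self.secondaries:
            node.data[key] = copy.deepcopy(value)

    def detach_secondary(self):
        # A detached node still holds the full database locally.
        return self.secondaries.pop()

rs = ReplicaSet()
rs.write("order:1", {"total": 42})
rs.write("order:2", {"total": 7})
standalone = rs.detach_secondary()
print(standalone.data == rs.primary.data)  # True: full local copy
```

In a shared-storage design, by contrast, detaching a compute replica leaves it with nothing, because the data lives only in the shared storage layer.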
MarkLovesTech · 131 karma
I was a Caltech student, and we had the privilege of it being VERY easy to get jobs at JPL because Caltech manages JPL. I worked in Section 331, the space comms group. In that role, I had the illustrious job of programming a PROM burner and hooking it up to a VAX 11/780. I also worked in Section 346 with some fabulous people - in that group we did semiconductor research, and I got to system-manage my first 1-megabyte MicroVAX with a 30 MB hard drive - and that computer supported a lab of about 25 people! And that is where I got to play with a scanning tunnelling microscope and look at the atomic surfaces of stuff. JPL was completely amazing.
MarkLovesTech · 404 karma
You know what, I’m not going to be able to spend time on this question in this AMA. It’s not really something I’m comfortable speaking ‘off the cuff’ about as it’s quite a serious issue. I’d love to follow up with you afterward and go deep on this.