Highest Rated Comments
SnArL817 · 3 karma
Isn't this counter-productive to cloud-based storage? The whole point of the cloud (vis-a-vis storage) is consolidation of storage aggregates using multiple SAN devices. The backend provider eats the cost of the redundancy (typically 1 drive in 16 for a RAID5 storage aggregate). If your technology splits the storage among multiple providers and adds redundancy on top, isn't the end user paying for that redundancy instead of the backend storage provider?
I guess what I'm asking is: doesn't Sia's architecture remove (or at least reduce) the economy of scale that makes cloud-based storage cost-effective?
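To put rough numbers on it (the 16-drive RAID5 figure is the one above; the 10-of-30 erasure-coding and 3x replication parameters are hypothetical illustrations, not Sia's actual defaults):

```go
package main

import "fmt"

// overhead returns how many raw bytes must be stored per usable byte.
func overhead(dataUnits, totalUnits int) float64 {
	return float64(totalUnits) / float64(dataUnits)
}

func main() {
	// Provider-side RAID5 across 16 drives: 15 data + 1 parity.
	fmt.Printf("RAID5 (16 drives):     %.2fx raw per usable byte\n", overhead(15, 16))

	// Hypothetical client-side 10-of-30 erasure coding
	// (any 10 of 30 pieces reconstruct the file).
	fmt.Printf("10-of-30 erasure code: %.2fx raw per usable byte\n", overhead(10, 30))

	// Plain 3x replication across independent hosts.
	fmt.Printf("3x replication:        %.2fx raw per usable byte\n", overhead(1, 3))
}
```

The gap between ~1.07x provider-side and ~3x client-side is exactly the redundancy cost being shifted onto the end user.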
SnArL817 · 2 karma
I can understand paying a premium for trust. But who holds the cryptokeys to my data? Is it JUST me, do we both have them, or are they derived as a hash of my authentication creds?
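For context, the "only I hold the key" answer usually means client-side encryption: the key is generated on the client and never leaves it, so hosts only ever store ciphertext. A minimal sketch with Go's standard-library AES-GCM (illustrative only, not Sia's actual key scheme):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"io"
)

// encrypt seals plaintext with a client-held key; the host only
// ever sees the returned ciphertext (nonce prepended).
func encrypt(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // key must be 16, 24, or 32 bytes
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

func main() {
	key := make([]byte, 32) // generated and kept client-side only
	if _, err := io.ReadFull(rand.Reader, key); err != nil {
		panic(err)
	}
	ct, err := encrypt(key, []byte("file chunk"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("host stores %d opaque bytes\n", len(ct))
}
```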
How is the distribution handled? Is data replicated from the master copy in near real-time, or do I run the risk of updating a file in LA and having my friend download a previous version in London because we're accessing the data from different storage locations? (Obviously, I don't want your proprietary distribution details, but many a doctoral thesis has been written on geo-diverse data synchronization). How do you guarantee data integrity across multiple storage backends?
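On the integrity point: one common approach with untrusted backends is for the client to record a checksum per chunk at upload time and verify every download against it, so a corrupt host copy is detected and routed around. A minimal sketch assuming per-chunk SHA-256 checksums (Sia's contracts use Merkle-tree proofs, but the verify-on-download idea is the same):

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// verifyChunk recomputes the hash of data fetched from a host and
// compares it to the checksum the client recorded at upload time.
// A mismatch means that host's copy is corrupt and another replica
// (or erasure-coded reconstruction) should be used instead.
func verifyChunk(data []byte, want [sha256.Size]byte) bool {
	got := sha256.Sum256(data)
	return bytes.Equal(got[:], want[:])
}

func main() {
	chunk := []byte("block from host A")
	want := sha256.Sum256(chunk) // recorded client-side at upload

	fmt.Println("intact copy ok: ", verifyChunk(chunk, want))
	chunk[0] ^= 0xFF // simulate bit rot on the host
	fmt.Println("corrupt copy ok:", verifyChunk(chunk, want))
}
```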
How does Sia handle the loss of a single storage backend? What about multiple backends? What happens in the event of data corruption on one of the mirror copies? I know it's nearly a statistical impossibility, but what happens if multiple backend copies get corrupted?
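With k-of-n erasure coding the loss arithmetic is mechanical: any k of the n pieces reconstruct the data, so up to n-k hosts can fail, and a corrupt piece (caught by its checksum) counts the same as a lost one. A sketch of that bookkeeping, again with hypothetical 10-of-30 parameters:

```go
package main

import "fmt"

// recoverable reports whether a file striped k-of-n across hosts
// can still be rebuilt after some pieces are lost or fail their
// integrity check (a corrupt piece is just a lost piece).
func recoverable(k, n, lostOrCorrupt int) bool {
	return n-lostOrCorrupt >= k
}

func main() {
	const k, n = 10, 30 // hypothetical parameters, not Sia's defaults
	for _, lost := range []int{0, 5, 20, 21} {
		fmt.Printf("%2d of %d pieces gone -> recoverable: %v\n",
			lost, n, recoverable(k, n, lost))
	}
}
```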
SnArL817 · 3 karma
A simple lock flag would probably work best for you guys. As the client is recomputing parity information, have it send each storage host a "client is updating file xxxxy" packet. Once it's uploaded the new data to all hosts, send an all-clear packet. This also helps prevent data corruption caused by a sudden loss of the client. It's handled like this:
The client wants to make a change to a file, so it uploads the changed file to the storage host. The host stores the data in NEW blocks. Once the upload is complete, the host drops the old file and puts the new data in its place. This way, if the client system crashes during an upload, the previous copy can still be downloaded, and the upload can be resumed once client functionality is restored.
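In code, that write-new-blocks-then-swap step is just an atomic rename on the host side; the "client is updating" and "all-clear" packets bracket it. A minimal host-side sketch (file-based for illustration; a real host would do this per contract or sector):

```go
package main

import (
	"fmt"
	"os"
)

// applyUpdate stages the client's new data in a temporary file, then
// atomically swaps it over the old copy. If the client (or host)
// dies mid-upload, the old file is untouched and still downloadable,
// and the staged .new file can be resumed or discarded.
func applyUpdate(path string, newData []byte) error {
	tmp := path + ".new"
	if err := os.WriteFile(tmp, newData, 0o644); err != nil {
		return err // upload failed; old copy still intact
	}
	// Atomic on POSIX filesystems: readers see either the old
	// file or the new one, never a half-written mix.
	return os.Rename(tmp, path)
}

func main() {
	path := "chunk.dat"
	_ = os.WriteFile(path, []byte("old version"), 0o644)

	if err := applyUpdate(path, []byte("new version")); err != nil {
		panic(err)
	}
	data, _ := os.ReadFile(path)
	fmt.Println(string(data)) // "new version"
}
```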
SOURCE: Am a UNIX/Storage Administrator and I deal with block replication all the time. :D