Orange Man, Orange Coin, No coincidences!
(media.greatawakening.win)
They're 4MB, but that's too high. They should never have negotiated at all.
Bitcoin has no scaling limitations, you're misinformed
Misinformed how?
Blocks are not 4MB. The artificial 1MB limit is still there in the Bitcoin Core code.
BTC also invented a new term - vByte (which isn't used in ANY other computer application) - to be able to say blocks are 4MB when they are not.
SegWit just made it so that signature data "doesn't count" toward the 1MB limit so it can fit more transactions into the 1MB space.
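If you want to sanity-check the accounting yourself, here's a minimal sketch of the BIP 141 weight/vbyte math (the transaction sizes in the example are made up for illustration):

```python
# Simplified sketch of SegWit "weight" accounting (BIP 141):
#   weight = 3 * base_size + total_size, and vbytes = weight / 4.
# The consensus cap is 4,000,000 weight units, which is why
# "4MB blocks" and "the old 1MB base limit" are both sort of true.

MAX_BLOCK_WEIGHT = 4_000_000  # consensus limit in weight units

def tx_weight(base_size: int, witness_size: int) -> int:
    """base_size = tx bytes excluding witness data."""
    total_size = base_size + witness_size
    return 3 * base_size + total_size

def tx_vbytes(base_size: int, witness_size: int) -> float:
    return tx_weight(base_size, witness_size) / 4

# A legacy tx with no witness data counts its full size 4x...
print(tx_vbytes(base_size=250, witness_size=0))    # 250.0 vbytes
# ...while witness bytes are discounted 4-to-1.
print(tx_vbytes(base_size=150, witness_size=100))  # 175.0 vbytes
# Max vbytes per block:
print(MAX_BLOCK_WEIGHT / 4)  # 1,000,000 -> ~1MB of pure legacy txs
```

So a block of pure legacy transactions still tops out around 1MB of raw bytes; only blocks heavy in witness data can approach 4MB.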
You still didn't answer my question. Why was that decided as the optimal limit?
We are all walking around with 5G smartphones and streaming Netflix. Broadband is everywhere. If you are a serious miner there is no reason you wouldn't have the infrastructure to handle 32MB or 128MB blocks every 10 minutes. If you are a normal user there is no reason to download and maintain the entire blockchain when you can use public nodes. So why was 1MB / 4MB chosen as the optimal value and why is this a forbidden question that you can't answer?
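Quick back-of-envelope on the bandwidth, counting nothing but raw block delivery (real relay pre-forwards most transaction data anyway):

```python
# Sustained bandwidth needed just to receive one block of a given
# size every 10 minutes (ignores relay overhead and validation).
def sustained_mbps(block_mb: float, interval_s: int = 600) -> float:
    return block_mb * 8 / interval_s  # megabits per second

for size in (1, 4, 32, 128):
    print(f"{size:>3} MB blocks -> {sustained_mbps(size):.2f} Mbps sustained")
# 1 MB -> 0.01 Mbps ... 128 MB -> ~1.71 Mbps
```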
"decided"
That's just what it's always been. Are you just mad you didn't get to decide before Satoshi?
Maybe your shitcoin should have proposed a better soft fork instead of going full retard on a hard fork; then maybe it wouldn't be irrelevant?
How big is a video chunk? 🍿
Big blocks = longer propagation time = bigger head start for next block = centralization
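To put rough numbers on that, here's the standard back-of-envelope orphan-risk model, assuming Poisson block arrivals and the made-up propagation delays below:

```python
import math

# Rough orphan-risk model: blocks arrive as a Poisson process with
# mean interval T. If your block takes t seconds to reach other
# miners, the chance someone finds a competing block in that window:
#   P(orphan) ~= 1 - exp(-t / T)
def orphan_prob(prop_delay_s: float, block_interval_s: float = 600) -> float:
    return 1 - math.exp(-prop_delay_s / block_interval_s)

# Assumed propagation delays (illustrative, not measured):
for size_mb, delay in [(1, 2), (32, 20), (128, 60)]:
    print(f"{size_mb:>3} MB, {delay:>2}s to propagate -> "
          f"{orphan_prob(delay):.2%} orphan risk")
# ~0.33%, ~3.28%, ~9.52%
```

Bigger miners with better connectivity eat less of that risk, which is the centralization argument in a nutshell.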
You seem technically very clueless. We're still nowhere near scaling limitations, and transactions cost pennies.
So maybe you forgot that Satoshi always planned on scaling up the limit. He left specific instructions about it, and it wasn't even a complicated solution: increase the limit to 2MB, then 4, 8, 16, and so on, always staying ahead of demand and leaving headroom for more transaction capacity, with hardware improvements along Moore's Law keeping pace. Blocks were never supposed to be always full.
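His oft-quoted bitcointalk sketch was literally just a height check: `if (blocknumber > 115000) maxblocksize = largerlimit`. Something in that spirit, with an assumed doubling period (the constants here are mine, not his), would look like:

```python
# Illustrative height-based doubling schedule in the spirit of
# Satoshi's "if (blocknumber > 115000) maxblocksize = largerlimit"
# sketch. The activation height and doubling period are assumptions.
BASE_LIMIT_MB = 1
START_HEIGHT = 115_000      # assumed activation height
DOUBLING_BLOCKS = 210_000   # assumed: double every ~4 years

def max_block_size_mb(height: int) -> int:
    if height <= START_HEIGHT:
        return BASE_LIMIT_MB
    doublings = 1 + (height - START_HEIGHT) // DOUBLING_BLOCKS
    return BASE_LIMIT_MB * 2 ** doublings

for h in (100_000, 120_000, 400_000, 800_000):
    print(h, max_block_size_mb(h), "MB")  # 1, 2, 4, 16 MB
```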
Maybe instead of shit talking you could actually bring a sound argument?
The hard fork was a failsafe because the Core software got infiltrated and compromised. That's when the censorship started and attacks began against anyone discussing increasing the block size. A hard fork just means everyone has to upgrade their software to follow the same consensus rules. It's a normal upgrade path for open source software, not some nefarious trick. Imagine saying that you can never upgrade your operating system and no one else can either, because it should always be possible to run a node on a RasPi.
32-128MB is pretty normal for video chunks, and how long does that take to download on a residential broadband connection? Less than a minute? What about a commercial connection like miners would have? Seconds?
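If you don't believe the arithmetic, run it (raw transfer only, assumed link speeds):

```python
# Time to pull one block of a given size over a given link,
# raw transfer only (no relay protocol overhead or validation time).
def download_s(block_mb: float, link_mbps: float) -> float:
    return block_mb * 8 / link_mbps

for size in (32, 128):
    for label, mbps in [("residential 100 Mbps", 100),
                        ("commercial 1 Gbps", 1000)]:
        print(f"{size} MB over {label}: {download_s(size, mbps):.1f}s")
# 32 MB: 2.6s / 0.3s ... 128 MB: 10.2s / 1.0s
```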
You are saying that 1MB is absolutely the optimal capacity limit for the network when blocks are always full, the backlog is usually over 1GB, and fees can reach hundreds of dollars per transaction, and yet somehow it's "nowhere near scaling limitations"? How does this make sense to you?
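The fee math itself is simple: fee = vsize × feerate. With a feerate and BTC price assumed purely for illustration:

```python
# Fee math: fee_sats = vsize_vbytes * feerate_sat_per_vb.
# The feerates and BTC price below are assumptions for illustration.
def fee_usd(vsize_vb: float, sat_per_vb: float, btc_usd: float) -> float:
    return vsize_vb * sat_per_vb / 1e8 * btc_usd

# A typical ~140 vB payment at a calm 5 sat/vB vs a congested
# 500 sat/vB, assuming $60,000/BTC:
print(f"${fee_usd(140, 5, 60_000):.2f}")     # ~$0.42
print(f"${fee_usd(140, 500, 60_000):.2f}")   # ~$42.00
# A big multi-input tx (~1000 vB) at the congested rate:
print(f"${fee_usd(1000, 500, 60_000):.2f}")  # ~$300.00
```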
When is the last time you looked at a block explorer?
When is the last time you sent a BTC transaction wallet-to-wallet?
Why is having 1MB blocks okay but 2MB blocks would disrupt the network according to you?
Oh by the way, Bitcoin Cash just solved this problem forever, since it now has an algorithmically adjusted block size with a 32MB floor. It runs totally fine.
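For the curious, the general shape of an algorithmic limit with a floor looks like this. To be clear, this is a toy sketch with made-up constants, not BCH's actual ABLA math:

```python
# Toy adaptive block-size limit: NOT Bitcoin Cash's actual ABLA
# algorithm, just the general shape -- an EWMA of observed block
# sizes, scaled up for headroom, clamped to a 32 MB floor.
FLOOR_MB = 32.0
HEADROOM = 2.0   # assumed multiplier above the moving average
ALPHA = 0.01     # assumed EWMA smoothing factor

def next_limit(ewma_mb: float, last_block_mb: float) -> tuple[float, float]:
    ewma_mb = (1 - ALPHA) * ewma_mb + ALPHA * last_block_mb
    return ewma_mb, max(FLOOR_MB, HEADROOM * ewma_mb)

ewma, limit = 1.0, FLOOR_MB
for observed in [0.5, 2.0, 20.0, 40.0]:
    ewma, limit = next_limit(ewma, observed)
    print(f"block {observed:>5.1f} MB -> limit {limit:.1f} MB")
# At today's usage the floor dominates; the limit only rises
# once sustained demand pushes the average up.
```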
No, I don't owe you one. The argument was settled... your ideas failed and your coin is worthless... I'm here for my own entertainment.
Go run a pre-infiltration version then.
More like 1-5MB. Bitcoin blocks would ideally be 300KB to communicate across ham radio... it's global money after all; it needs to work over 3G and high-orbit satellite.
I literally run a Bitcoin company