RE: Hive API Node for under $750 - Hive can scale at very low cost.


I agree that the level of knowledge about witnessing and running nodes needs to improve and broaden.

This is precisely why I am both calling for far better documentation from Hive devs and encouraging people to run nodes on equipment they own, physically located with them.

If you have a witness running beside you at your desk and you can see the logs running all the time (and notice immediately when they go down), you are much more connected to being a witness than if it runs on some "cloud" server somewhere.

You are more likely to learn how it works. That is certainly what I am trying to do.

According to @someguy123 you can even run a witness on a machine with 4GB of RAM.

8GB RAM with a 4-core/8-thread CPU should be plenty for a backup witness.
My main witness will have 24GB RAM and a 6-core CPU, and my API node 128GB RAM, a 6-core/12-thread CPU and 2x1TB NVMe in RAID 0.

Why do you suggest that I would be missing blocks with this setup if I made it to a top 20 witness slot? I can always add a second backup witness node to provide further redundancy if necessary. I've got heaps of PC parts lying around. :-)




If you have a witness running beside you at your desk and you can see the logs running all the time (and notice immediately when they go down), you are much more connected to being a witness than if it runs on some "cloud" server somewhere.

I disagree; I can see logs from anywhere in the world with similar efficiency. While I am not against having your own equipment whatsoever, I do have concerns about running critical servers on a home network. Granted, not everyone has to deal with snow, but I won't run even simple services like Poshbot on my home network because I don't trust a home connection to stay available. This is just my own opinion.

Why do you suggest that I would be missing blocks with this setup if I made it to a top 20 witness slot? I can always add a second backup witness node to provide further redundancy if necessary. I've got heaps of PC parts lying around. :-)

I see no problem with that hardware, but you did mention a 9-year-old laptop. The thing is, running a witness is a lot different from replaying one. While we do have the ability to do snapshots now, it can still take hours to transfer block logs and snapshots even with a 1 Gbit Internet connection. The real problem comes during hard forks and emergency patches: you won't have a snapshot to rely on, and will need to do a full sync, for witness nodes and API nodes alike.

Most witnesses are nowhere to be seen when critical shit happens. The witnesses that are, and that have the hardware, Internet, and knowledge to support agile recovery, are who I want protecting the Hive blockchain. Unfortunately, that list is rather small and isn't reflected in the witness rankings.

Posted Using LeoFinance Beta


Of course you can see logs from anywhere in the world, just as you can do Zoom calls with anyone in the world, but my point was not about efficiency.

My point was about physicality, commitment, and the "in-your-face" rather than "out of sight and out of mind" aspect.


You are right that a lot of witnesses just set and forget, and some don't even notice they are missing blocks for days or weeks.

It is precisely because they are using far-away rented servers that they have less commitment to, and interest in, learning witness skills than if the witness node were living with them.
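
To make the "living with them" point concrete: you don't even need to stare at log files to catch missed blocks. A few lines polling a public API node for your witness's total_missed counter will flag a problem within a minute. This is just a minimal sketch, assuming the api.hive.blog endpoint and a placeholder witness account name, not a finished monitoring tool:

```python
# Minimal missed-block alert sketch. Assumes the public api.hive.blog
# endpoint and a placeholder witness account; adjust both as needed.
import time
import requests

API_URL = "https://api.hive.blog"     # any public Hive API node
WITNESS = "your-witness-account"      # placeholder, replace with your witness

def total_missed(account: str) -> int:
    """Fetch the witness object and return its total_missed counter."""
    payload = {
        "jsonrpc": "2.0",
        "method": "condenser_api.get_witness_by_account",
        "params": [account],
        "id": 1,
    }
    resp = requests.post(API_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["result"]["total_missed"]

last = total_missed(WITNESS)
while True:
    time.sleep(60)                    # check once a minute
    current = total_missed(WITNESS)
    if current > last:
        print(f"ALERT: {WITNESS} missed {current - last} block(s)")
    last = current
```

Wire that print into email, Telegram or whatever actually gets your attention and the "didn't notice for weeks" failure mode mostly goes away.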


I keep my own backups of the block_log and do my own snapshots, which I can transfer to whichever Hive node needs them over my Gigabit Ethernet LAN. Even the 330GB block_log takes less than an hour to transfer.
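
The arithmetic backs that up. A rough back-of-the-envelope sketch (the 330GB figure is the one above; the efficiency factor is just an assumed allowance for protocol and disk overhead):

```python
# Rough transfer-time estimate for the block_log over Gigabit Ethernet.
# 330 GB is the size mentioned above; the efficiency factor is an assumed
# allowance for protocol overhead and disk throughput.
def transfer_minutes(size_gb: float, link_gbps: float, efficiency: float) -> float:
    """Estimated transfer time in minutes for size_gb over link_gbps."""
    size_gigabits = size_gb * 8                      # GB -> gigabits
    return size_gigabits / (link_gbps * efficiency) / 60

print(round(transfer_minutes(330, 1.0, 1.0)))   # ~44 min at full line rate
print(round(transfer_minutes(330, 1.0, 0.8)))   # ~55 min at 80% sustained throughput
```

Either way it comes in under the hour, which matches what I see on my LAN.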


While I agree that the ability to replay is also important for hard forks and emergency patches, replay speed is NOT about the Internet connection or where a node is located.

It's about single-core CPU performance, RAM and I/O speed.

The fact is that the machine I specced above will beat 90% of servers in those factors.

In particular, server CPUs generally have a large number of cores but relatively poor single-thread performance. High-end desktop and gaming PCs, by contrast, generally have fewer cores but much better single-thread CPU performance.


Regarding knowledge: if you want more people to have it, then more effort needs to be made to spread it.

While I have had great assistance from people like @someguy123, @deathwing and @rishi556, the documentation on running Hive nodes is scarce, hard to find and out of date.

This is something that really needs to be remedied. There needs to be an "Idiot's Guide" accessible on all the major front-ends and on hive.io.


It is precisely because they are using far-away rented servers that they have less commitment to, and interest in, learning witness skills than if the witness node were living with them.

I disagree; I don't think that has anything to do with it. Granted, someone who buys hardware likely cares more about Hive than someone who rents, but correlation is not causation.

I keep my own backups of the block_log and do my own snapshots, which I can transfer to whichever Hive node needs them over my Gigabit Ethernet LAN. Even the 330GB block_log takes less than an hour to transfer.

I will agree this is far easier on your own LAN.

While I agree that the ability to replay is also important for hard forks and emergency patches, replay speed is NOT about the Internet connection or where a node is located.

It's about single-core CPU performance, RAM and I/O speed.

The fact is that the machine I specced above will beat 90% of servers in those factors.

Yes, but that wasn't my point. My point is that you cannot rely on snapshots, especially when a witness is needed most (during hard forks and chain-down events).

Regardless, I have no problems with running an API node on a home network if you have the resources to do it; a witness node I am not so agreeable about. I was merely saying that the Internet connection is the more critical factor, especially as the chain gets larger and providers make their data caps smaller.


Regardless, I have no problems with running an API node on a home network if you have the resources to do it,

Wow, I'm not going to take the time to count them, but what would you say, 20 comments or so to get to this point?

I'm so impressed that I'll ask you for the same information once again: can you share what you think those necessary resources are?

Edit: please forgive me if you've already laid them out - I got tired of getting lost in your maze and stopped reading, but did want to see how you ended things. Even if you have, maybe you might consolidate your thoughts and dedicate a post of your own to the subject? Remember, focused, with a title something like: Running an API Node from Home. ;-)


Regarding knowledge: if you want more people to have it, then more effort needs to be made to spread it.

While I have had great assistance from people like @someguy123, @deathwing and @rishi556, the documentation on running Hive nodes is scarce, hard to find and out of date.

This is something that really needs to be remedied. There needs to be an "Idiot's Guide" accessible on all the major front-ends and on hive.io.

Good job on all this, but I think it's very clear that there is heavy resistance to the idea (you can bet that more than one person has been following this thread closely, and they've probably got Discord burning hot!), and that this is going to be a long-haul marathon. To be honest, I wouldn't be surprised if you, and a few others too, end up being the ones who do the documentation. Lots of work, but it'll be worth the effort. Count on me to help in any way that you think may be useful and possible.

Maybe you might want to start a "Documentation Community". It's an idea that came up the other day on one of @taskmaster4450's threads. It might seem a bit like overkill, but it's really the only way to have admin control to manage, update and ultimately control the content, and given how much importance documentation carries, administrative control seems absolutely necessary to me.

Food for thought, and thanks for taking the lead. This is really good stuff. I, for one (and I'm also absolutely sure - on this one too - that I'm not alone), am lapping up every bit of your contributions on the subject! 😋


I admire your persistence and patience (I can't even bear to continue reading more of that "wet blanket power" - don't have the patience or the time). You're definitely doing the right thing and thank you so much for your efforts. My "reality check" response fell on deaf ears, but was nonetheless very revealing and worth taking note of. In my opinion, what you are proposing is a key step forward that we must take, it's in our own best interests, and we must press forward in spite of whatever resistance may arise, no matter how great (or small) it may be. A big thumbs up!


I don't think he means to be a wet blanket; he just comes from the perspective of a professional IT guy.

As a profession they have become far too used to using "cloud" services.

It's easier, more comfortable, easier to scale (for centralised solutions), and no one ever got fired for using AWS or other "cloud"-based solutions.

But what is sacrificed is control and independence, and the costs are higher than buying and running your own equipment.

There are also substantial diseconomies of scale with massive data farms - George Gilder has written about this in "Life After Google".

Also there is a loss of hardware technical skills, although @themarkymark is impressive in this regard.

Now the chickens are coming home to roost for those who want independence and free speech.


As a profession they have become far too used to using "cloud" services.

It isn't about the cloud so much as the reliability of a home data center vs. a real data center.


And there you go again. It has nothing to do with "reliability". If anything clear has come out of this "conversation", it's precisely that. Or haven't you realized that you agreed that for an API Node . . .

How tiring . . .

It has everything to do with the IT professional in charge (and, of course, without question, that means, by definition, having PHYSICAL ACCESS), and the systems and connectivity solutions he or she has contracted and implemented.


But nooooooooooooooooooo.

I've never been so infuriated!

Next you'll be telling us that Bitcoin wallets are better stored on data center shared servers (heads up: that's a joke meant as an exaggerated extreme to make the obvious evident in a funny way, which means a pedantic response is not necessary - save your breath).


You sound like you have lots of friends.


You sound like you have lots of friends.

And that's all you have to say.

Says a lot, don't you think?

(And I mean that seriously: all your arguments summed up in one nice little phrase. Or do you think serious issues are for brown-nosing only? Are you serious? This is about making friends?)


Nah just got tired of your dribble.


Hahaha. You don't even get it.

Anybody and everybody can clearly see that your previous comment was nothing more than yet another personal attack.

In spite of all your efforts to the contrary, you've actually done a wonderful job expressing yourself with absolute clarity.

I've said all the "dribble" I have to say on the subject, and it's here to stay, for all to see, forever, on this immutable blockchain, and, given such, 'I rest my case'.


I used to work in IT, and I would say that anyone relying on cloud services for sensitive data is not a professional (even though all the "kids" do it - hey, we live in a world where a 100k hack is solved by "printing" more tokens - not serious at all). This is my opinion, no doubt about it, but with the cybersecurity issues we ALL know about, I never, ever kept a client's sensitive data in the "cloud". NEVER! Not even backups. That's all done ON SITE, and in multiple locations if the data is extremely sensitive.

To be honest, everything I've heard here on his part sounds extremely "amateur". Sorry, but that's my take on what I've seen him say (and how he's said it, etc., etc.).
