Core dev meeting #54: Big one with lots of updates


Very long meeting today, we had lots of things to cover. Also, big news: I have started my own witness. Please consider voting for me!

@Howo

What did I do over the past month? So I believe there was the optimization for hivemind but I think that was from last month.

I sent a bug fix for the issue where beneficiary payments were not included in the payout in hivemind. So if you had a post with beneficiaries set, the total payout would not show the correct value: the beneficiary rewards were simply not included in the payout, so the value you would see was not the correct one. Literally, if you made a post with 100% beneficiaries, you'd only see the curation rewards, so as a user you wouldn't actually see how much money the post generates. That's a bit deceptive. So that's been solved; it's in a merge request, tested and ready to be merged.

I started working on the beneficiary feature and...

@Blocktrades

So a quick question. Did that require any changes to the indexing?

@Howo

It did require changes to the indexing, but nothing with a performance impact; it's basically the way it already works. It used to be payout = author reward + curation, and now it's payout = author reward + curation + beneficiaries. So it's not introducing any more data; I mean, it's changing the data, but it's not extra work being done. Basically, there is a virtual operation which carries the author rewards, the field for beneficiary rewards was already there, and I just added it to the computed payout value.
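A minimal sketch of the computed payout described above (not hivemind's actual code; field names like author_rewards, curator_rewards, and beneficiary_rewards are illustrative assumptions):

```python
# Illustrative sketch only: the total payout shown to users is the sum of
# author, curation, and beneficiary rewards. The field names are assumptions.
def total_payout(post: dict) -> float:
    return (
        post.get("author_rewards", 0.0)
        + post.get("curator_rewards", 0.0)
        + post.get("beneficiary_rewards", 0.0)  # previously omitted, hence the bug
    )

# Example: a post with 100% beneficiaries used to appear to pay out only the
# curation portion; with the fix the beneficiary share is counted as well.
post = {"author_rewards": 0.0, "curator_rewards": 2.5, "beneficiary_rewards": 7.5}
print(total_payout(post))  # 10.0 instead of a misleading 2.5
```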

@Blocktrades

Okay. So it sounds like it shouldn't impact indexing speed.

@Howo

No, there should be literally no impact. Then I worked a bunch on the beneficiary feature for communities, where you can set a default beneficiary. I found out that you can set beneficiaries on a post well after you made it, as long as there are no votes yet. That created a lot of performance issues, because I can't just assume that everything is going to happen in the same block. Which led me to the realization that, to do this in an optimal way, we need a muted reason, which is a change worth making regardless.

So the muted reason feature is basically a new field in all of the APIs. I made an issue if you want to look at it (https://gitlab.syncad.com/hive/hivemind/-/issues/219). It's important for me because then, if a post was muted during indexing, I don't have to redo some of the processing. And it's useful for general users too, because there are now about five different ways a post may be muted, and it's important to let them know why a certain post was muted.

And that way, front ends can either choose to ignore a muted reason, let's say a front end wants to allow low-reputation members to be displayed as non-muted, or they can show a little icon explaining why the post was muted: "it's because you posted in the wrong community" or "it's because a moderator muted you" or something like this.

@Blocktrades

Okay, so this would show up in a list when you get a list of posts too, right?

@Howo

So, well, yeah, there'll be a muted_reason field which contains an array of enums.

@Blocktrades

Okay, so you can have codes for the different types of reasons.

@Howo

On the database side, it's also just stored as an array of numbers, the numeric values of the enums, so it's as small as possible.

So I'm working on that. I realized there are about a thousand SQL queries that I have to change to add the extra field.

@Blocktrades

Yeah, so would there be a function to get the list of reasons, with the strings associated with them?

@Howo

I guess we can add a function as well, that's a good idea, so the front ends don't have to worry about it. I was either thinking of adding a function that returns the strings, or directly returning the strings from the API, doing the translation there. But I'm afraid people would then start comparing strings.

@Blocktrades

Yeah, I agree. I thought about it too at the beginning, but it would also just be a lot of data, right? That could get long.

@Howo

I mean, we would store it in number format, and then just do the translation at the API level.

@Blocktrades

I agree, I think it's better to just have codes for it. But we should have a way for somebody to look up the codes.

@Howo

That's a good idea, so I'm going to add that as well, I guess. A single call where you query the muted reasons, any restrictions, all of the reasons: that's probably the easiest way to do it.

@Blocktrades:

Yeah, I think you just pull a list of all of them and then you can use it in your own code.

@Howo

Yeah, and realistically there will only be like 10 of those. So yeah.
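For illustration, a hypothetical sketch of what those codes and the lookup could look like; the reason names and values here are invented, the real ones are defined in the hivemind issue linked above:

```python
from enum import IntEnum

# Invented example codes; the real set lives in the hivemind issue/merge request.
class MutedReason(IntEnum):
    MUTED_BY_MODERATOR = 0
    WRONG_COMMUNITY = 1
    PARENT_MUTED = 2
    LOW_REPUTATION = 3
    MUTED_ROLE = 4

# The database stores only the small integers (e.g. {0, 3}); a single lookup
# call can expose the code -> description mapping so front ends never have to
# compare strings.
def muted_reason_descriptions() -> dict[int, str]:
    return {reason.value: reason.name for reason in MutedReason}

print(muted_reason_descriptions())
# {0: 'MUTED_BY_MODERATOR', 1: 'WRONG_COMMUNITY', 2: 'PARENT_MUTED', ...}
```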

@Blocktrades

It also allows us to do, you know, language translation too.

@Howo

So yeah, that's what I've been working on. I'm still in the early stages of that. The only issue I have is that the muted reason is computed at two different places. Sometimes you get muted directly in the database, let's say if a moderator mutes you, and sometimes the post gets muted at query time: for instance, if you have minus one reputation, the post gets flagged as muted even though it's not muted in the database.

So that means I'll have to compute that muted reason list in two different places. But it's not the end of the world. It's just that if someone decides to query directly from the hivemind database, it might be a bit confusing. But I don't know that anyone is doing that, and we want people to move away from querying the hivemind database in favor of HAF anyway.
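A conceptual sketch of how the two sources could be merged at query time, reusing the hypothetical reason codes from the sketch above; this is not the actual hivemind implementation:

```python
LOW_REPUTATION = 3  # hypothetical code from the earlier sketch

def effective_muted_reasons(db_muted_reasons: list[int], author_reputation: int) -> list[int]:
    """Merge reasons stored during indexing with ones only known at query time."""
    reasons = list(db_muted_reasons)
    if author_reputation < 0:  # e.g. a -1 reputation author: muted only at query time
        reasons.append(LOW_REPUTATION)
    return reasons

# A post not muted in the database but written by a negative-reputation author:
print(effective_muted_reasons([], -1))  # [3]
```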

@Blocktrades

Well, I mean, hivemind is HAF now.

@Howo

Yeah, I know. But in the sense that we want people to build their own apps: you shouldn't query hivemind directly if you want post information, you should build your own HAF application to do what you want.

@Blocktrades

Yeah, I honestly think it's likely to be a mix. I think people will have their own stuff in HAF, but I suspect front ends that are showing posts are probably always going to be calling hivemind functions.

@Howo

I mean, yeah, but those front ends are going to call the hivemind API, so that won't be the issue. I'm talking about a dapp that spins up hivemind and then does raw SQL queries against it. I don't believe anyone is doing that, slash should be doing that.

@Blocktrades

Yeah, at that point they could be using the same SQL calls that hivemind uses, they just wouldn't be doing it through an API call.

@Howo

What else? I believe that is it.

@Blocktrades

Okay. Let's see, I guess I'll cover what we've been doing next. I'll just cover some of it, because it's been a long time and there's been a lot of stuff. Let me just go through the list here. So one of the things we did along the way: if hived crashes, for instance, or your computer shuts down, it can be in the middle of writing a block to the block log, which can leave you with a basically corrupted block log, because it only has half of that block written to it.

So we added a command line option that will detect that condition and automatically throw away the half-completed block, so that you have a solid block log again without having to download a whole new one. It does require a replay at that point, but that's still better than having to download another block log from somewhere.
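A conceptual illustration of the recovery idea only, not hived's actual code; how the end of the last complete block is located depends on the block log format:

```python
import os

def repair_block_log(path: str, last_good_offset: int) -> None:
    """Drop a half-written trailing block from an append-only block log.

    `last_good_offset` is assumed to be the byte position right after the last
    block that was fully flushed to disk (finding it is format-specific).
    """
    size = os.path.getsize(path)
    if size > last_good_offset:
        with open(path, "r+b") as f:
            f.truncate(last_good_offset)  # keep only complete blocks
```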

Let's see, what else? We made some changes to the JSON serializer, just to handle numbers in a more standard way; I won't go into that too much. This is the JSON serializer inside FC, the library used by hived.

We're also working on the beginnings of being able to do block log pruning of various sorts. Basically, the idea is that you might run a light hived node that doesn't have the entire block log, so you could keep just the last couple of blocks, for instance; there are various alternatives here. There have been some discussions about it in the Mattermost channels.

So probably most of you guys are aware of it, but just to make people aware: this would give us an opportunity to have special hived nodes that are very, very lightweight and don't require a lot of disk space to run. Let's see. We did some work on the crash handler. Again, I don't think this is of interest to most people unless they happen to be running hived nodes, but it will give us better error reporting in the case of a crash. We had that before, but sometimes it didn't work so well, so this fixes some of those problems.

We added an API to hived which allows you to check the status of a node while it's replaying. Basically, while you're replaying a node, it will tell you what block number it's on. So if you've got a tool that wants to find out how long before the hived node comes back into sync after the replay, you can use this API call; it's basically the only API call a hived node will respond to while it's replaying, before it's entered live sync. We're using it for various things: HAF apps can use it to check the status of the hived node that might be feeding them, for instance, but in general anything that talks to a hived node and wants to know how long before it's going to be ready could use this API call.
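A sketch of how a tool might poll a replaying node over JSON-RPC; the method name and result field below are placeholders invented for illustration, not the actual API names:

```python
import json
import urllib.request

def replay_block_number(node_url: str = "http://127.0.0.1:8090") -> int:
    """Ask a replaying node which block it is currently on (illustrative only)."""
    payload = {"jsonrpc": "2.0", "id": 1,
               "method": "replay_status_api.get_status",  # placeholder method name
               "params": {}}
    req = urllib.request.Request(node_url, json.dumps(payload).encode(),
                                 {"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["result"]["last_replayed_block"]  # assumed field name
```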

We did work on a feature that was requested in a post. The idea: right now, after you vote once, the next vote you cast has less power, and the third vote after that has less again. Your voting power temporarily dwindles based on previous votes, because each vote uses a fraction of the mana you currently have. This change allows you to vote at full strength until you run out of voting power, instead of dwindling each time. So instead of taking half a step along the way and never quite getting to your goal, you can exhaust your mana pretty fast just by voting full power each time. There was a bunch of discussion about this, and some people pointed out that there was also a benefit to the old way, which didn't let you vote with your full strength; I think the solution @Smooth proposed was that UIs can support the option of voting at less than full strength on the second vote and so on.

So basically front ends could still allow the old behavior, which didn't use your full vote strength, and it would just be an option as to how you want to vote.
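To make the old-versus-new behavior concrete, here is a toy numeric illustration; these numbers are not the actual consensus math, just the shape of the change:

```python
def old_style(votes: int, mana: float = 100.0) -> list[float]:
    """Old rule: each full-power vote spends 2% of the *remaining* mana,
    so each successive vote is weaker."""
    strengths = []
    for _ in range(votes):
        spend = mana * 0.02
        strengths.append(spend)
        mana -= spend
    return strengths

def new_style(votes: int, mana: float = 100.0) -> list[float]:
    """New rule: each full-power vote spends 2% of *full* mana,
    so votes stay full strength until the pool is exhausted."""
    strengths = []
    for _ in range(votes):
        spend = min(100.0 * 0.02, mana)
        strengths.append(spend)
        mana -= spend
    return strengths

print(old_style(3))  # [2.0, 1.96, 1.9208] -- dwindling influence
print(new_style(3))  # [2.0, 2.0, 2.0]     -- full strength each time
```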

I hope I've explained that clearly. If anybody's read the discussions on the blockchain and in the Hive post, they probably know what I'm talking about, but otherwise feel free to ask questions about it now. Is that familiar to everybody? Yeah? Okay, I won't cover that more now then.

We're also doing some work related to eventually making RC (resource credits) consensus. Not that we're doing it now, but there are preliminary steps that have to be done along the way, so we've completed some steps towards that. Whether that actually happens will of course be up to the witnesses in the long run. We also made some fixes to the account history RocksDB plugin regarding getting reversible transactions; we found some problems there. Even though that plugin is mostly deprecated now, some people are still going to run it just because it's a little more lightweight than running a HAF server, so exchanges and the like will probably depend on it for quite a while. That's why we made the fixes there.

We added some documentation to the Hive repo itself; it's basically a bunch of .md files that describe all the various Hive operations that can be put into transactions.

We added a bunch of tests to Hive. I'm not going to get into all of them; it's a bunch, mostly testing various Hive operations that we just hadn't gotten around to testing yet. We also made some improvements to the automatic testing system, the continuous integration system, to make the tests more stable. We've had problems where sometimes a test would fail even though there was nothing wrong.

So the other thing we're working on lately is BeeKeeper. BeeKeeper is basically a tool for storing keys in a secure way: they are stored encrypted in memory, and the idea behind BeeKeeper is to have a common way that apps can hold keys securely. We've got two interfaces for it now. There's a binary interface for desktop applications, and there's also a WASM-based (WebAssembly) version which can be used by JavaScript applications. So things like HiveBlog and PeakD and so on can potentially incorporate it and store their keys that way.

It was originally being developed for Clive, which I'll talk about next, I guess. So, is it clear what BeeKeeper is? I don't know if I've explained it well enough that everybody understands the idea behind it.

@mcfarhat

Yes, but how does it work? I mean, is it a plugin extension? What is it exactly?

@Blocktrades

So the WASM version has a TypeScript interface around it. WASM itself is basically WebAssembly code, sort of like lower-level JavaScript, if you will. And it's got a TypeScript object-oriented interface. So basically you would just incorporate the code into your application directly; it's like a library.

@mcfarhat

I love that it's TypeScript. I mean, it can be easily integrated into a JavaScript interface. That's fantastic. So it could be an alternative to Keychain or just using the...

@Blocktrades

That's right. That's right. It can be used directly as a local library to manage the keys.
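For illustration only, a hypothetical usage pattern for a BeeKeeper-like key store; the class and method names here are invented and are not BeeKeeper's real API. The point is that the app unlocks a wallet and asks the store to sign, without ever handling raw keys in its own logic:

```python
class KeyStoreSketch:
    """Stand-in for a BeeKeeper-like component (illustrative names only)."""

    def __init__(self) -> None:
        self._unlocked = False
        self._keys: list[str] = []          # the real tool keeps keys encrypted in memory

    def unlock(self, password: str) -> None:
        self._unlocked = True               # a real implementation would verify the password

    def import_key(self, wif: str) -> None:
        assert self._unlocked
        self._keys.append(wif)

    def sign_digest(self, digest: bytes) -> bytes:
        assert self._unlocked
        return b"signature-over-" + digest  # placeholder, no real cryptography here

store = KeyStoreSketch()
store.unlock("hunter2")
store.import_key("5J...example-private-key")
signature = store.sign_digest(b"transaction-digest")
```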

Let's see. So Clive is basically a command line wallet. But unlike the existing command line wallet, it has... I don't know if saying a curses interface means anything to anybody, but it's a console interface where you get to use the whole console, not just one line; sort of a console user interface. And we've been adding features to it recently. Originally it just had the ability to do things like transfer funds. We've added a feature for managing savings, the savings features of Hive. We've added functions for loading and saving transactions; this is for doing offline cold wallet signing and things like that, so that's going to be quite useful, I think. We also added features for governance, because that's one of the other things a lot of people want a secure wallet for: voting for witnesses, voting for proposals, and also changing your proxy.

It also has an interface that works more like a traditional command line tool, a single line. So you can basically control Clive through a bash script, for instance, and give it commands that way; sort of for more automated wallet work, if you want to write scripting for it.

And Gandalf has just posted more info on Clive features (https://peakd.com/@thebeedevs), so I won't go into too much more detail. A bunch more fixes were done to it as well. And you can download it as a Docker image, so it's super easy to just try it out. So I recommend people follow that post, and you can actually try it out in real time and see how it works.

Let's see. So, another thing we've got is something called Wax. This is a low-level module that we're using for other parts and other tools; it's a kind of common layer. Right now it supports Python and TypeScript. It basically allows you to hook into a bunch of functions that are built into Hive itself: it incorporates some code from Hive into a separate library, so you can do Hive-like things without having to talk to a hived node.

Let's see. What else? We made one improvement to HAF that sped up the performance of one of the most important functions in account history, which is get_account_history itself. @mahdiyari reported some cases where filtering account history was slow, so we sped up that query quite a bit.

Hivemind: well, two main changes to hivemind. Howo mentioned one; we incorporated his changes for new community types.

@good-karma also added a feature for reporting reblog counts. So, that's added to some of the queries.

Let's see. Then, I guess, on HAF itself, there's been quite a lot done; HAF is where we've been spending quite a bit of our time. So we've been doing a lot of performance improvements, experimenting with various ways to speed things up. I made a change to do async commits that sped up HAF replay time by about 30%. And I'm also trying a suggestion from another guy here; @mahdiyari had done some preliminary tests on it too, which was good, and he showed basically a twofold speedup. So we're running full tests now, which will hopefully allow you to massive sync HAF at about twice the speed it does now.
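For context, the general PostgreSQL technique usually meant by "async commits" looks roughly like this (an illustration, not HAF's actual code; the connection string and table are just examples):

```python
import psycopg2

# With synchronous_commit off, COMMIT does not wait for the WAL flush, which
# speeds up bulk writes at the cost of possibly losing the last few
# transactions after a crash (the database itself stays consistent).
conn = psycopg2.connect("dbname=haf_block_log")       # example connection string
with conn, conn.cursor() as cur:
    cur.execute("SET synchronous_commit = off")        # per-session setting
    cur.execute("INSERT INTO example_table VALUES (%s, %s)", (1, "block data"))
```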

We've also figured out a bunch of other things to speed things up. When HAF reindexes during massive sync, it has to create a bunch of indexes in your database, and that usually took hours; then creating foreign keys takes several more hours on top. We figured out a way so that we don't really need to verify the foreign keys during massive sync, and that alone cuts a couple of hours off the massive sync time. They still get checked afterwards, though, during live sync.
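The foreign-key part builds on a standard PostgreSQL pattern: constraints can be added as NOT VALID during the bulk load and validated later. A rough illustration (not HAF's exact scripts; the table, column, and constraint names are made up):

```python
import psycopg2

conn = psycopg2.connect("dbname=haf_block_log")       # example connection string

# During massive sync: add the constraint as NOT VALID so the bulk-loaded rows
# are not re-checked, which saves hours on a multi-terabyte load.
with conn, conn.cursor() as cur:
    cur.execute("""
        ALTER TABLE operations
        ADD CONSTRAINT operations_fk_block
        FOREIGN KEY (block_num) REFERENCES blocks (num) NOT VALID
    """)

# Later, e.g. once the node is in live sync: validate the constraint so the
# data still ends up fully checked.
with conn, conn.cursor() as cur:
    cur.execute("ALTER TABLE operations VALIDATE CONSTRAINT operations_fk_block")
```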

There are so many things in HAF that it's hard to list all the changes. We added support for recognizing, basically, structured record types on the SQL side for Hive operations. This will make it easier for people who are writing HAF apps to process Hive operations without having to write their own code for doing that.

We added a bunch of the default plugins to the hived node that's used by HAF. So basically you'll be able to use a single hived node both to feed your HAF server and also to reply to API calls. We're trying to make a HAF API node as small as possible in terms of resources. And along the same lines, we created a new repo called HAF API node. It's a pretty small repo, basically a bunch of Docker-related stuff. It allows you to use Docker Compose to launch and run a HAF API node, that is, an API node with hived and HAF that has everything configured to be a public API node.

As part of that, one of the implicit ideas is that you'll be using ZFS for your file system. So what we'll be able to do is deliver ZFS snapshots of a fully in-sync API node. If you want, instead of doing a full replay on your own, which still takes days, you'll be able to just download one and have it running; basically the time to get it running will be pretty much the time it takes to download, plus maybe five or ten or twenty minutes after that to sync up. Really, really quick.

So we should be able to get HAF API nodes running very, very quickly once this stuff is all done. It's pretty close now.

@BrianOfLondon

I'll just ask you a quick question. What are you using as the reverse proxy at the front? Is it Docker Compose?

@Blocktrades

It's actually a kind of complicated question; there are several things being used. Caddy is being used for rate limiting. We've got Jussi in there for handling the old-style calls, and we have Varnish in there for handling the new-style calls. So, I guess I'll bring that up: we've actually got two different types of API calls now. The old JSON-RPC style calls are still supported, but the new API calls we're doing are all REST calls. That's why Varnish is there: Varnish caches the new-style calls, and we use Jussi to cache the old-style calls.

When I say it sets up the entire node, I mean it sets up the entire thing. With Docker Compose there are things called profiles, and a profile is like a set of applications, Dockers, that you want to launch. So we've made up different profiles for different things. There's a basic profile called core; if you just use the core profile by itself, you get a hived node and a HAF database, and that's it. Then there's another layer which we call admin; if you add the admin profile, that'll add a PgHero and a pgAdmin Docker.

With those, PgHero is like a web interface to your database that accumulates statistics on the SQL queries, so you can use it for analyzing the performance of your database. pgAdmin is another web-based interface for communicating with a database, but it's more for administration. So PgHero is really focused on optimization of queries and identifying performance problems, whereas pgAdmin is a more generalized tool for, I guess, exploring your database and making changes.

We use those quite a lot in our own work, so we have them as options you can set up. Even if you're not doing development, I think a lot of HAF API node operators will probably find it useful to run them too, just for analyzing problems if they have any performance issues on their server for whatever reason.

Then there's another profile called apps. Apps will basically start up a bunch of the standardized applications that we expect HAF API nodes will typically want to run. Right now that's hivemind, HAfAH, and also the new HAF Block Explorer API; those three apps get added if you add that profile. And then we have a final profile called servers. The servers profile basically addresses the question you were asking, Brian, about setting up the proxies and things like that: servers adds the Dockers for Varnish, Caddy, Jussi, et cetera.

So that one is for when you're looking to expose your interface to external people or to external sites and web servers.

So, yeah, basically my goal here is to have it so there are very minimal steps to set up your server. You do something like this: first create a ZFS pool, then run a script we've got that lays out that pool and sets up all the file systems on it, and after that you can basically just run a Docker Compose command, which will start fetching all the Dockers, and you have a system up. Then you'll probably also want to fetch a snapshot to fill it up with current data, or you can replay from scratch; it's up to you, it just takes a while to replay everything.

But I think we did cut that down quite a bit just recently. Like I said, we've probably cut down hivemind replay as well as HAF replay. I haven't got the final numbers yet, so I can't report them with complete confidence, but I believe it's going to be about twice as fast. So I think we're talking: it went from about four and a half days to about two days and 10 hours for a full hivemind replay.

I probably missed a few things. We've been continuing to do a lot of work on the HAF block explorer; there's a dedicated team working on that. There are two parts to it. There's the backend, which is basically all the API calls, and that'll be generally useful to anybody; I think even other block explorers will use those calls, because they should be more efficient than the calls they're likely to be able to make now to do analysis.

We're also working on the front end for it. One of the things added recently was the ability to search blocks and search posts and comments. That's a little heavyweight, though, so I'm personally still wondering if we shouldn't move that to a separate application, but we'll see how it goes.

Other than that, I don't want to get into everything else we've done. We've made a lot of changes on the CI (continuous integration) side, and a lot of things to make things lighter weight and consume fewer resources. But that's probably enough talking from me for now. I'll pass it on to whoever else... Anybody else want to discuss what they've been working on recently?

@howo

Actually, I have questions about your stuff. Also, one thing I forgot to mention: in hivemind, there is another merge request ready to merge, containing a helper tool to basically push mock blocks. I found out that when I'm working, it's quite difficult to just add one extra block with one operation; sometimes you want to test your query just once. So I made that. It's just a simple helper tool: you execute it, and it increases the block height by one and then propagates that to HAF and then hivemind.

@blocktrades

So, it adds another block, a mock block, but what's in the block?

@howo

Well, whatever you want. Basically, you pass it a mock block as a param, and it automatically figures out what block number it should be: it just gets added as the next block ahead of what's currently synced on hivemind.
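A rough sketch of what such a helper could look like; this is not the actual merge request, and the table name and the populate_mocked_blocks signature are assumptions made for illustration:

```python
def populate_mocked_blocks(blocks):
    """Stand-in for the existing hivemind test helper mentioned below."""
    ...

def push_mock_block(db_cursor, mock_operations: list[dict]) -> int:
    """Append one mock block right after whatever hivemind has synced."""
    db_cursor.execute("SELECT max(num) FROM hivemind_app.blocks")  # table name is a guess
    next_block = (db_cursor.fetchone()[0] or 0) + 1
    populate_mocked_blocks([{"block_num": next_block,
                             "operations": mock_operations}])
    return next_block
```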

@blocktrades

Okay. So, is this a script or how does it work?

@howo

It's a standalone Python script. It's like a wrapper over populate_mocked_blocks.

@blocktrades

Okay, I saw the commit... I mean, I saw the merge request. I hadn't looked at it because it's on the testing side of hivemind, which I know very little about, so I kind of left it to Bartek to look at. Bartek, did you have a chance to look at that yet?

Bartek

I haven't seen it yet, but I will take a look at that, and also at the separate merge requests from howo that were made there. Okay.

@Howo

It's really not urgent to merge it, if it's important to merge at all. It's a tool that I find useful, and maybe you guys will find it useful as well. But really,

@blocktrades

If I were writing tests, I would definitely use it.

@Howo

So, I mean, I pushed it because I made a bunch of other scripts like this, like one to dump and rebuild the database and such, and this helper script felt like it could be useful to others, so I pushed it. But if you feel like it's not that useful, then I'd be happy to drop the merge request. Okay. So, yeah, that's the other thing.

And regarding all of the work on, quote unquote, light nodes: do you think they would apply to history nodes or to witness nodes? What's the use case here?

@blocktrades

So, I mean, it's kind of open; we haven't really defined it well yet. I really wasn't intending it to be used by witness nodes at all; I think witnesses, at least, should keep all the blocks, personally. But there are a lot of other nodes out there besides witness nodes, right? Every application of any size should probably consider running their own node, I think, just so they don't load down the public nodes. So nodes like that. If we lower the cost for those guys to run their own node, it makes much more sense for them to do so.

And there's also the idea of your personal node, right? You may just want to have your own personal node so you can fully trust your own setup. In that case, not every Hive user should have to consume 500 gigabytes of disk to run their own local Hive node. Yeah.

Bartek

Exactly. Actually, our intention was to make that an option especially for HAF nodes, because HAF requires a lot of space, and freeing up another 400 gigabytes by not needing the full block log could help here. So I hope that will be useful.

@gtg

I'm currently running a Hive consensus node on a machine that eats seven watts of energy, eight to ten when it's syncing. But the biggest limitation is that I need big drives for it. So that would be a perfect use case.

@mcfarhat

I probably won't beat your seven watts. But when I saw the tutorial by blocktrades about the HAF API node... we've wanted to upgrade our active Hive API node for a while, so this actually came in handy right on time. We just rented a four terabyte server and we're going to be syncing it up. I'm hopeful the process goes as smoothly as it's explained; I think it should, but we're going to try it out.

@blocktrades

I'll say we're still making improvements. I'd hoped to release this, you know, a week or two or three ago, but we keep finding improvements. I think we're down to the last little bit, so I'm hoping within a week or two we will make an official release. There are just a very few things left that need to be addressed. I mean, it works now, don't get me wrong, you can set it up now. But, for instance, a couple of the speedups haven't been merged in yet, so it still takes quite a while to sync hivemind.

@mcfarhat

Okay, yeah. I mean, we're going to give it a test.

@blocktrades

Yeah, absolutely. And any feedback is useful. We're doing a lot of testing in-house; I've got probably seven monster machines that we use for testing, and I usually try to keep at least four or five of them running tests at any given time. But outside testing is always useful, because outside people always try things a little bit differently than we do, and that identifies problems we don't see ourselves, because we never tried to do something quite that way.

@mcfarhat

Regarding the block explorer, we're going to be setting up another one as well. So instead of reinventing the wheel, we're going to reuse the code that you guys have. Is there a test environment right now that runs the front end, so that we can at least have a look at how it's functioning, and maybe set up our own node so that it interfaces using the same GitHub code?

Bartek

We have such an instance, but it is only available internally. We plan to publish it soon, and I hope it can be available within maybe the next two weeks.

@mcfarhat

Okay, so let's say we set up the HAF API node. Once we set up a clone of your repo, we can connect it to the HAF API node and then hopefully get a block explorer running, test it out and see how it functions.

Bartek

Actually, you can of course start the block explorer, because it is a regular HAF application, and you can also start the UI; both parts are dockerized. So the process should be repeatable and succeed on your side as well. And I hope the documentation written there is sufficient for you to reproduce it; if not, please open an issue there and we will try to improve the documentation.

@blocktrades

So just to give you the way I would do that right now: you'd first go to the HAF API node repo and read the instructions there, which basically tell you how to set up your HAF API node. If you configure the apps profile, you'll get the HAF block explorer as part of that. It'll sync up your HAF block explorer, and it has to replay just like the other apps, just like hivemind, so it also takes a few days. Once it's replayed, you basically have the API support, and then you just run the UI to talk to it; at that point you have a web server, a front end running with the block explorer.

@mcfarhat

Excellent, excellent. Okay. Because for me, I just cloned the block explorer front end, the UI, and tried to run it, and of course it needs a block explorer API to read from.

@blocktrades

Yes, that's right. And that's what the HAF API node part will give you.

Bartek

The important note for the block explorer, for both block explorer repos, is to use the develop branches.

@mcfarhat

Yeah, I'm always using those. Yeah, absolutely. Okay. Maybe I'll share a quick update from our side; I might have mentioned it several weeks back. We've been trying to revamp the notifications for Actifit, and we tried to use the blockchain notifications, but we found a few issues; I think I mentioned them on a call. So right now we are parsing the whole blockchain on our own for Actifit, and we revamped our notifications. We're not using HAF yet; we set up a Firebase push notification system. So we just set up a whole new approach to notifications, and we launched a Chrome extension for notifications for the whole Hive ecosystem. I'm not sure if you guys saw it; it's called Hive Alert. It's not core development, but because I mentioned it before, I wanted to share it quickly. Anyone who wants to get notifications in their browser can use the Hive Alert Chrome extension. It's open source, of course, and it's not HAF: it's push notifications via Firebase, and we're just parsing the whole blockchain for that.

@blocktrades

Well, I mean, HAF is doing a similar thing, so it's the same at some level, just a different technique. And I guess I should also say: we call this core dev, but I think this meeting is about any kind of software that all the apps are interested in. So yeah, libraries of general use I certainly consider core; anything that's used by app developers is, I think, what these meetings are for.

@howo

I initially made sure to call this core dev because I didn't want it to turn into a town hall where all the app devs would share, let's say, advances on something that only concerns their own dapp. But it's been a few years now and everyone pretty much gets the idea, so anything that concerns other devs is fair game, in my opinion.

@mcfarhat

Quick question, blocktrades. You had a release candidate coming soon. Is that still planned for December?

@blocktrades

Yes, it's still planned for December. It was planned for like two weeks ago, but it's still planned for quite soon. I would like to say next week, but I'm kind of shy of saying that at this point, after being burned a couple of times.

@mcfarhat

Looks like we're going to have fun over the holidays.

@blocktrades

We'll try to avoid that at least.




14 comments

Greetings friend @howo, it seems very honest to me that this exchange of proposals to improve the operation of the platform happens publicly. It seems to me that you are doing very well; you have my support and my vote as a witness.


Thank you @howo for your vote, I'll keep an eye on you for any support requests.


Thanks a lot indeed! Too late for me now, but I will check it out tomorrow!


Excellent. Thank you for taking the time to explain each of your steps. I already gave him my vote. A thousand successes, @howo.


That was a good meeting!

I also love that you posted a video plus the whole conversation as text, it makes it really easy to follow.


Thanks, it's a huge pain to edit the conversation text, but I feel like it's worth it.


Agreed! Reading keeps me from getting distracted! How did you create the transcript?


https://openai.com/research/whisper and a ton of hand editing


Yeah, I was wondering about that. Typically these tools don't know about user handles :) Thanks a lot though for taking the effort to share it all with us! I've recently been trying to keep on top of the latest around here.


Yeah, wow, that's so much work, but it's awesome you did that!


Howo, the next new witness on my list. Blocktrades has been a witness for years already.

