RE: [ENG/ITA] Python & Hive: My Scripts are Ready! My First Project is Completed :)
I know that they suck, but that was exactly my goal: to show everyone that I'm trying to do something with what I learn and to get suggestions from more experienced coders
I never said they suck.
I just couldn't read it in one pass.
If it works, it's ok.
The infinite loop is an eyesore, though :D
error checking
error checking is lame. Don't worry about it.
I'll blog about that soon, too.
Do you know this meme?
[https://www.reddit.com/r/ProgrammerHumor/comments/16nhu63/printbellcurve/]
They seem to work, but they crash sometimes (at the very least I have to catch some errors related to JSON files, as they seem to cause trouble from time to time... or should I say from block to block? 😂), and I didn't test them extensively enough to be sure that there aren't some bugs I'm not aware of.
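For what it's worth, catching those JSON errors so one bad block doesn't kill the whole script only takes a few lines. A minimal sketch (the 'parse_block' helper and the payloads are invented for illustration, not from the actual scripts):

```python
import json

def parse_block(raw_text):
    """Try to parse a block's JSON payload; return None if it's malformed."""
    try:
        return json.loads(raw_text)
    except json.JSONDecodeError as err:
        # Log and skip the bad block instead of letting the script crash
        print(f"Skipping malformed block data: {err}")
        return None

print(parse_block('{"block_num": 123}'))  # well-formed: parses normally
print(parse_block('{"block_num": '))      # truncated: prints a warning, returns None
```

The caller can then just skip `None` results and move on to the next block.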
Stupid infinite loops 🤣 and I thought they were cool ahahahah
Nope, but I get it ahahah
Infinite loops are not always stupid. You are also right about catching errors that may have nothing to do with your code. And there can always be some bug you couldn't have seen unless you had made those print statements. Speaking of catching bugs, I must soon release a new version of my 'fetch_liquidity_pools.py' script, because I happened to catch an enormous bug and also made the script blazingly fast by fixing it.
Super cool! I'm very curious to see your updated version and learn new stuff from your work!
On a side note, I also experimented a bit with your script and tried to introduce a "Session" (not sure how to describe it 😅 it's part of the "requests" library) to see if it could make the script even faster... and it worked! Not sure if you may be interested in seeing it :) it's a very small change and it may help you make it (ultra)blazingly fast!
Oh cool, a new toy!
I bet it made the original script faster, but now that I tried it with the improved one, it surprisingly made it slower. I guess it has something to do with the fact that when I originally caught the script fetching pool details for each token pair separately, I decided that, since it wasn't possible to retrieve pool details for certain accounts only, I would set it to retrieve details for every pool in one go. This made the script faster, but now that I introduced
requests.Session
to my script, it added just a tad of overhead, making it somewhat slower than before. Weird. But I already know what I will use the
requests.Session
for: my main script, which among other things controls the fetch_liquidity_pools.py
script. Thank you for possibly solving a serious problem for me, as the main script was giving me some serious gray hair! I'll be checking this out more thoroughly for sure! :)
!PIZZA !BEER !WINE
:)))))
I'd be extremely happy if it helps you even a tiny bit, as it would be my first contribution ever to someone else's script! Just a very, very, very, veeeeeery small one, but that would still be something :)
Is this the script you shared in your latest post? Or an even bigger and more complex one?
Btw, here's your original code with the "Session" included, just to prove that I really experimented with it 🤣 (I don't know how to highlight my edits, but I only opened a session, added it as a parameter to the API calls and "prepared" the calls with the slightly different syntax of a session... but I have no idea if what I just said makes sense to you or to anyone at all!)
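In case it helps to picture the change, here is roughly what "opening a session and adding it as a parameter" looks like in miniature (the endpoint, payload, and function name below are invented placeholders, not the real Hive-Engine calls):

```python
import requests

API_URL = "https://example.com/api"  # placeholder endpoint

def fetch_pools(session):
    # The session comes in as a parameter, and session.post() replaces
    # requests.post(), so the already-open connection gets reused
    response = session.post(API_URL, json={"method": "find", "params": {}})
    return response.json()

def main():
    # One session for the whole run; it is closed automatically on exit
    with requests.Session() as session:
        print(fetch_pools(session))
```

The key detail is that every helper function receives the session and calls its methods, instead of calling the module-level `requests.post()`.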
Hmm... interesting. It appears I didn't know how to use the session that I opened in 'main', and only used the
requests.post()
function instead of session.post()
; this is what made my script slower. So my script still used the old way of fetching data, while it had opened a session for faster communication. I timed my old script with your enhancements:

My current script, which I spruced up with your enhancements:
Oh, I notice you had added a timer inside the main loop, nice touch. I am lazy and didn't bother, since I could 'time' it in the terminal. But yep, there were some repeated requests in the original script that made it way slower. Here's the current one:
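For anyone reading along, an in-script timer like that is just a couple of lines with 'time.perf_counter' (the summation below is a stand-in for the real work, i.e. the API call and parsing):

```python
import time

start = time.perf_counter()
total = sum(i * i for i in range(100_000))  # placeholder for the API call + parsing
elapsed = time.perf_counter() - start
print(f"Done in {elapsed:.3f} s")
```

The terminal equivalent on Linux/macOS is simply prefixing the run with 'time', e.g. 'time python fetch_liquidity_pools.py'.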
Edit: I noticed that I still hadn't added the 'session' to the arguments of the functions. So I did, and updated the script here to reflect that. The problem is that now it's slower again. The 'broken' script was a tenth of a second faster, but this one correctly establishes the session. According to ChatGPT, I might as well not use 'session' anymore, since I only make one request. A session only makes things faster if you do multiple calls to the API. But anyway, it was a fun experiment.
Oh, and I forgot to comment about the main script I'm working on. It's actually a balancer for liquidity pairs. I might write about it as soon as I iron out the wrinkles.
Well, I had no idea you could do that 😅 what you can do through the terminal is still a bit obscure to me, because I'm not used to it, so I have no idea of its flexibility and power!
Wow, this one is crazy fast! Now I'm going to read your new post :)
It's the only thing I understand about this "Session" stuff 😂 because I still have to figure out what parameters/values it keeps, and whether it can be used whenever I make more calls to an API or only if the calls are somehow linked or something... I have no idea tbh ahahah
Heh, it wasn't immediately clear to me either, although I had an inkling. Here's how I eventually understood the concept myself: the idea with sessions is that when you use one, you're keeping the connection to the server open, so you don't need to create a new connection for each API call. It's like having an uninterrupted "conversation" with the server, rather than hanging up and redialing for every single message. This can save time and resources, especially when you're making multiple API calls within the same session.
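The "conversation" picture boils down to one structural difference in the code (the URLs here are placeholders):

```python
import requests

urls = ["https://example.com/a", "https://example.com/b"]  # placeholder URLs

# Without a session, each call dials a fresh TCP (and TLS) connection:
#   for url in urls:
#       requests.get(url)

# With a session, the first call opens the connection and the rest reuse it:
def fetch_all(urls):
    with requests.Session() as session:
        return [session.get(url) for url in urls]
```

That's also why a script making a single request sees little or no benefit: there is no second call to reuse the connection.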
!PIZZA
Hey felixxx,
I think you might have misunderstood the meme. It’s showing that both people on the low and high ends of the spectrum often stick with simple techniques, like adding print statements, because they actually work. The middle of the curve, though, is where people tend to overcomplicate things by going through function descriptions or stepping through debuggers — which sometimes doesn't reveal the root of the problem. Print statements, though, can give you a direct, clear view of what’s actually going on in the code.
But beyond just print statements, error handling ensures that your script doesn't crash when something goes wrong — like an API failing or invalid data sneaking in. Without it, you're left in the dark, which is why it’s far from being 'lame.' It's what gives you control and the ability to recover when things don't go as expected.
As for infinite loops — they aren’t bad by nature. When you're working with something like continuously reading blockchain data, they can be the best option. The key is to manage them properly with safeguards like rate limiting and exit conditions, so they don’t end up causing performance issues. It’s all about using the right tool for the job.
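A managed loop of that kind might be sketched like this (the polling callback and the safeguard values are invented for illustration):

```python
import time

def stream_blocks(fetch_next_block, poll_interval=3.0, max_failures=5):
    """Poll forever for new blocks, with rate limiting and an exit condition."""
    failures = 0
    while True:  # intentional infinite loop
        block = fetch_next_block()
        if block is None:
            failures += 1
            if failures >= max_failures:
                print("Too many consecutive failures, giving up.")
                return failures  # exit condition: don't spin forever on errors
        else:
            failures = 0
            print(f"Got block {block}")
        time.sleep(poll_interval)  # rate limiting: don't hammer the node
```

The loop itself never ends while data keeps flowing, but the sleep keeps the request rate bounded and the failure counter guarantees a way out.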
Looking forward to seeing more of your insights in your upcoming blog!