I noticed that the data structures I used to define weapons in my twin-stick shooter game resembled a procedural language, and for weeks I'd been thinking I'd build a scripting language for them someday so I could tweak them without relatively slow recompiles. The other day I realized I could just reuse the language and VM I'd already implemented for defining attack waves. My game now runs two instances of the same VM type: one for wave progression and one for weapons.
The language is very basic: parameterless subroutines, a single looping construct (repeat <n>), sleep <n> to yield to the caller and pause execution until the given time has passed, and a bunch of hard-coded keywords that affect the game state: spawning bullets or enemies, setting the spawn area, setting the current weapon, triggering sound effects, or displaying messages/wave vignettes. Nearly every token in the language corresponds directly to an instruction in the VM, aside from braces, which mean slightly different things depending on whether they're used for a loop or a subroutine.
Weapons in this system are plain subroutines (though with an alias for the subroutine keyword for organizational purposes). The player data structure has a pointer to the subroutine that corresponds to their current weapon, and for as long as they hold fire and no subroutine is already running, the VM executes the subroutine to completion.
For example, my definition of a shotgun looks like this:
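(A sketch of what such a definition could look like in a DSL of this shape — every keyword and name here is my guess based on the description below, not the actual syntax:)

```
weapon shotgun {
    sound shotgun_fire
    repeat 12 {
        spawn redbullet angle -0.15 0.15 speed 350 450
    }
    sleep 0.3
    sound shotgun_cock
    sleep 0.4
}
```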
This plays the shotgun firing sound effect, fires 12 "redbullet" projectiles at an angle offset of -0.15 to 0.15 and at a speed of 350 to 450 pixels per second, sleeps for 0.3 seconds before playing the shotgun cocking sound effect and then another 0.4 seconds before the player can fire again.
The effect on the code was nice. It removed a whole bunch of complexity by getting rid of the previous weapon system altogether, and the compiler and VM changes were pretty small, so overall a net negative LOC change in the engine code. Weapon definitions also came out much shorter, because the previous weapon system lacked a looping construct. I anticipate this will only get better with more complex weapons that could benefit from calling subroutines. The result is also much more flexible: I don't strictly need to use the keywords I added just for weapons in a weapon definition, so weapon definitions can now cause any of the game-state effects the wave definitions can.
I have successfully gotten the ES8312 to work and sample an infrared phototransistor. Since it has a 30 dB PGA and 24-bit depth, the results are quite encouraging.
Next I am having some infrared beacons made. A set of four placed just outside the corners of a screen will hopefully do the trick.
The goal is to estimate where on a screen surrounded by the beacons the phototransistor is pointing.
I will be using the Goertzel algorithm to extract the amplitudes of four different frequencies from the ES8312 stream, one for each corner.
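For a single frequency, Goertzel is only a few lines. A minimal sketch in Python (function and variable names are mine, not from any particular library):

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Squared magnitude of one frequency bin via the Goertzel recurrence.
    Cheaper than a full FFT when you only need a handful of bins."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)   # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        # s[t] = x[t] + coeff * s[t-1] - s[t-2]
        s_prev, s_prev2 = x + coeff * s_prev - s_prev2, s_prev
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2
```

Running four of these over each block of samples would give one amplitude per corner beacon.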
I am a bit worried about the math. Worst case, it will only work when I am facing the screen head-on. Best case, I will be able to devise a set of equations to solve so that it works from an arbitrary angle.
Some might ask why not use a camera. Well, this is way more fun. Also a lot cheaper. And definitely better (to me) than painting a weird white rectangle around the screen and tracking that.
I've made a very simple blog for fun. I'd link to the post where I wrote about how I did things, but the site is still a bit scuffed in terms of features, and I haven't actually given posts their own pages yet. But there are only two for now, so just scrolling is still an option. I'm hoping to find time to implement some things soon to make it a bit more functional.
I'm continuing to work on this cross-platform app for programming LÖVE apps on both keyboard and touchscreen devices. In the last week I improved its graphics isolation (keeping programs you write from interfering with the IDE; it all happens in a single app to keep iOS happy), added a menu option to manually rotate the device (because LÖVE doesn't seem to respond to rotate events on Android), and added scrollbars. Man, scrollbars are hard.
I've also been building fun/silly little apps for my kids on it. Like this.
I am currently building a custom WordPress theme for a pet project I will eventually use in my portfolio.
I have occasionally checked a website about a mildly obscure musician since at least 2004, and the website is still run as if it were 1998 (vanilla HTML (4?), updated by hand). I'm reworking it into a modern WordPress website with all of the existing content. Once I finish, I'm going to create an instructional video on how to use and maintain the website.
I'm going to offer both to the website owner, for free, if they want them. I've gotten so much joy from their website that it feels like the least I could do. Even if they don't want it, it's nice to have a project that's not just dummy text or something built from tutorial examples.
I posted a few months ago about a visualization program for music I was working on.
Well, it turns out the FFT code I used was returning absolute garbage, so I need to replace it.
It's my fault for relying on ChatGPT for it, because I really struggle to understand the math behind Fourier transforms.
I resumed reading "The Scientist and Engineer's Guide to Digital Signal Processing" by Steven W. Smith, in the hopes that it will walk me through all the math and make me understand.
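In the meantime, one thing I can do is keep a naive O(N²) DFT around as ground truth for checking whatever FFT I end up using. This is just the textbook definition, nothing library-specific:

```python
import cmath

def naive_dft(x):
    """Textbook discrete Fourier transform: X[k] = sum_t x[t] * e^(-2*pi*i*k*t/N).
    O(N^2), so only useful for small N, but easy to trust."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]
```

Any FFT implementation should agree with this on small inputs to within floating-point error.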
Besides that, I feel the urge to make a Pokemon Red clone. I started playing it and I'm three badges in, and for some reason it hit me that making a similar game would tickle all three of my interests: programming, pixel art, and electronic music.
I do want to avoid starting new projects when I have an unfinished one in the works, though, but maybe this time I can manage two projects in tandem? One is basically stuck in math limbo, so while I read through that book, I could at least draw some fakemons and animate them. It would also give me a chance to try out Bevy, a Rust game engine I've been eyeing for some time.
I'm currently working through the whole concept of "creating a hardware product," minus packaging and financials.
So basically:
Planning requirements
Tinkering on a breadboard
Refining the circuit
Designing a proper PCB
Buying some manufactured PCBs
Soldering an initial prototype
Writing dedicated software
Creating a 3D-printable enclosure
I'm a software guy and I have very limited experience with hardware of any kind.
I have worked with Arduino, ESPs, Teensy, and the like before, and I know generic electrical and microcontroller concepts, but overall my experience is pretty limited, so I wanted to learn something new.
For now I chose a simple project: a fan and water pump controller that is configurable via USB + software.
I definitely want to make it open source, so the schematic, 3D model, PCB design, code, and USB protocol are all intended to be open.
It must be simple enough for beginners to recreate, and cheap enough to be affordable, even for school kids.
That's why I'm trying to use THT components rather than SMDs, to make it easier to solder, and a Raspberry Pi Pico, because it has a lot of hardware PWM pins, is fast, is super cheap, and can register itself as a dedicated HID device, which I use for its configuration.
The controller must allow the attachment of external thermistors that you can use as the input for a fan curve; additionally, you should be able to simply set a static percentage value.
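The fan-curve part boils down to linear interpolation over a few (temperature, duty) points. A quick sketch of the idea in Python (the firmware itself is C++; the names and the example curve here are made up):

```python
def fan_duty(temp_c, curve):
    """Linearly interpolate a PWM duty (0-100 %) from a sorted list of
    (temperature in deg C, duty %) points; clamp outside the curve's range."""
    if temp_c <= curve[0][0]:
        return curve[0][1]
    if temp_c >= curve[-1][0]:
        return curve[-1][1]
    for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
        if t0 <= temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)

# Example curve: quiet at 20 deg C and below, full duty from 60 deg C up.
CURVE = [(20.0, 20.0), (40.0, 50.0), (60.0, 100.0)]
```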
A dedicated lightweight desktop tool should be able to fully control and update it.
Honestly, understanding the USB HID descriptor documentation and the handling of reports was by far the most difficult part so far.
Initially I tried MicroPython but was not very fond of it. Then I tried TinyGo, but I was simply unable to make the USB connection work properly.
Now I just use C++ and deal with it. I use the arduino-pico core instead of the official pico-sdk, because it's more widely known.
The HID connection now works fine and I can send data back and forth! :)
Doing the PWM part for the fan and the water-cooling pump was the easiest, and it works very well already.
For serialization I have used protobuf for now, although I'm currently thinking about switching back to manual byte encoding, as protobuf brings in another dependency for a rather simple use case.
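For comparison, the manual-encoding route can be as small as a fixed-layout struct plus a checksum. A sketch of one hypothetical report (the field layout here is invented for illustration, not my actual protocol):

```python
import struct

# Hypothetical 5-byte report: channel (u8), mode (u8), value (u16 LE), checksum (u8)
def encode_report(channel, mode, value):
    body = struct.pack("<BBH", channel, mode, value)
    return body + bytes([sum(body) & 0xFF])

def decode_report(report):
    channel, mode, value = struct.unpack("<BBH", report[:4])
    if sum(report[:4]) & 0xFF != report[4]:
        raise ValueError("bad checksum")
    return channel, mode, value
```

The same layout is trivial to mirror on the C++ side with a packed struct, which is the appeal over protobuf for a protocol this small.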
I've been trying to replace the ReMarkable e-ink tablet's stack with my own, open stack - there are plenty of options to run a desktop on an e-ink tablet, but if you want to actually draw on it then you're generally stuck with Xournal++, which seems to have more buttons than LibreOffice for some reason, basically none of which are named. So I installed Parabola-RM on my RM tablet, and I've written a stylus-drawing app in PyQt using QtQuick.
Problem is, QtQuick doesn't support stylus input - tablet events (which are necessary for accessing pressure and tilt) aren't passed through, due to a several-year-old bug that's only just now getting fixed for Qt 6.6. On the plus side, the one advantage of Parabola is that the packages should be updated fairly quickly once 6.6 is released.
I still have stylus-as-mouse input, so it's not that big of a deal and I should probably focus on other stuff like making it take less than 10 seconds to render on the tablet. I suspect that's due to it being GPU-accelerated on a device that doesn't have a GPU (it renders normally on my laptop), but there's probably some O(N^2) stuff so I should probably profile it anyway. I was sort of hoping to run it on the PineNote, which does have a GPU, but I wrote this app ("app", lol) a year and a half ago and the PineNote still isn't ready for the general public.
I have some vague plans for how this thing will be better than the ReMarkable's default stack, but they're all really far away. Like, the RM forces you to tap out names for new notebooks on a virtual keyboard, when there could just be a lasso tool to select a handwritten title to be OCR'd/HWR'd. That requires a functioning HWR library, which doesn't exist in the open-source world. I could hack up an OCR lib like Tesseract, but OCR libraries are both less accurate on handwriting and more processing-intensive, and the RM has an (IIRC) 800 MHz ARM processor with no GPU.
HWR differs from normal OCR in that you pass in more than just raw bitmap data, which improves accuracy. For instance, if you wrote the letter "e" but its top two lines overlap, the resulting bitmap might be basically impossible to distinguish from a bitmap of a "c". HWR can solve this because its vector input gives you the stroke origin point, which makes the distinction basically trivial: if the stroke started on the left and then looped around, it's an e; if it started on the right and is one near-circular anticlockwise stroke, it's a c. This is a fascinating topic that I hope someone else will tackle.
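As a toy illustration of how much the stroke data buys you, that e-vs-c rule fits in a few lines (this is a deliberately naive sketch, nowhere near a real HWR engine):

```python
def classify_e_or_c(stroke):
    """Classify a single closed-ish stroke as 'e' or 'c' purely from where it
    started: starting in the left half of its bounding box suggests an 'e'
    (bar first, then loop); starting in the right half suggests a 'c'.
    `stroke` is a list of (x, y) points in drawing order."""
    xs = [x for x, _ in stroke]
    mid_x = (min(xs) + max(xs)) / 2
    return "e" if stroke[0][0] < mid_x else "c"
```

A bitmap-only classifier has no access to the start point at all, which is the whole point of the comparison.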
I'd also like the device to auto-sync and integrate with my desktop when I want to take notes, but sadly Parabola does not endorse functional WiFi, due to being FSF-approved. I could probably compile the kernel with WiFi support, but 1) setting up Parabola-RM was a nightmare that took a month and still isn't quite working, and 2) the Parabola-RM manual recommends using the RM toolchain to compile the kernel, and I basically started this project so I wouldn't have to think about the RM company or their toolchain or OS.
[a] Several weeks ago I built my first NAS server. It uses a pair of 2 TB drives set up as RAID 1. I was floored to see transfer rates as high as 92 MB/second from one of the Linux clients to the Linux NAS server!
[b] Once I completed the NAS server, I needed a way to back it up, as it already holds 35 GB and I'm nowhere near having it completely mirror the contents of the three Linux clients. I'll be backing up the NAS server to one of four USB-connected 2 TB drives hosting an ext4 file system. The next step is to create a schedule in which an external drive is the target of a weekly NAS server backup, then once a month move the most recent backup to a secure off-site location.
[c] Earlier this year I bought a tool to crimp connectors onto Cat 5/6 cable. It works well, but it's painful to keep eight tiny wires in place in the connector prior to using the crimping tool; the wires want to wiggle and squirm and reform into their original twist. https://tildes.net/~life.home_improvement/19bg/advice_on_setting_up_home_ethernet_with_unused_cable_already_in_the_walls introduced me to "pass-through connectors," which turn connector chores into a piece of cake! When I bought the pass-through connectors, I also bought a cable jacket stripper and a cable tester. Both are useful and inexpensive tools.
Nothing too major, but earlier this afternoon I dabbled a bit with high availability in Proxmox. I set up three Proxmox VMs on my Proxmox host, set up a cluster with these three VMs, then spun up a bunch of VMs on those virtual hosts, and it all just worked.
(1) RSS Reader: now that I am queuing up social media submissions for real and developing some really long queues, the mechanism I have for changing the order is inconvenient. Right now I have buttons that bump an item up or down one step in the queue; I really need controls that move things more than one place at a time, and the quick way to get there is to add "top" and "bottom" as well as "-5" and "+5" buttons. I'm still making little changes like that here and there, as opposed to doing any major development.
(2) Three-sided cards. I've got an art project that I'm procrastinating on, but I made some anime cards just to keep the printer from drying out. As a consequence I made the first anime "I card", that is, an "individual card", where each individual card in a series has its own unique QR code: https://mastodon.social/@UP8/111013706271196029
Right now the system puts in redirects so all the cards of the same design point to the same page, but later I can add services that take advantage of this. One idea I have is something that's a bit of a parody of an NFT, where somebody who has a card can register it to their email address.
(3) Blog. I picked out a theme for Pelican that I can live with, added a plugin for typesetting math, and did a lot of the DNS and cloud setup to publish the blog. Still gotta set up the CDN and finish the first blog post, which is about the design rules for the QR codes used in "I cards" (see (2)): maximizing the number of possible identifiers in a "version 2" QR code that is almost as simple as a QR code can be.
After a summer largely taken off from working on the project, I'm back to working on komorebi and whkd.
This week I started making both the window manager and the hotkey daemon more portable by allowing users to specify configuration file locations with flags (so that everything can live on a USB drive instead of in the system config paths), and I'm going to carry on integrating this so that AutoHotKey users can do the same for their window manager key bindings as well.
After Google Play Music was shut down in favor of YouTube Music, Google gave me a small window to download all the songs I had purchased from them. However, they didn't come with proper metadata, just files named with the song title.
Having tinkered with Python over the past month, I put together a small script that goes through a folder of .mp3 files, searches for the title of each song, pulls the artist and album information, and writes it as metadata.
The Spotify API has a limit of 100 requests every 30 seconds, and the Genius API only covers artist and lyric information.
After messing around with those, I discovered the YouTube Music API (which is where I should have started) was able to pull from its search results.
After running this through a couple of songs, I noticed some had incorrect metadata: I had not accounted for other songs with the same name as the title I searched for.
I do not have a solution for this problem yet; I may just let it run and fix any anomalies manually.
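One cheap mitigation I might try: prefer an exact title match among the search results instead of blindly taking the first hit. A sketch (the result-dict shape here is assumed; adapt it to whatever the search API actually returns):

```python
def pick_best_match(title, results):
    """Return the first result whose title matches exactly (ignoring case and
    surrounding whitespace); fall back to the top hit, or None if no results."""
    wanted = title.strip().lower()
    for r in results:
        if r.get("title", "").strip().lower() == wanted:
            return r
    return results[0] if results else None
```

This won't catch covers or re-releases with identical titles, but it filters out the "same name, different song" cases that bit me.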
Work has a small automation division where I mostly deal with paperwork nuisances. I'm currently using UiPath to automate timesheets. The downside is that instead of a straightforward spreadsheet, there are a lot of calls being made when you select a job title. I'll figure out a way to work around it; maybe lots of delays.
welp, guess a microblog here may help give me some motivation:
I'm working on a "small" renderer project in Vulkan. My main milestone will be to more or less port some animation samples from this mini-series and profile my implementation, hoping for better performance. The main divergence (outside of OpenGL vs. Vulkan) is that I'll load the model via glTF instead of as an md5mesh. It's a project that knocks out many birds at once, so I want to try it out while I still have time.
In theory, this should be something I knock out in a few weeks (as I've made renderers and worked in the graphics pipeline under OpenGL), but as mentioned above, motivation can be a bit low. As is, I last touched my code 2 months ago and was maybe 70% of the way to a triangle. I want to at least show off a triangle by the next post.