Vertex-based motion trails implementation using screen-space velocity
6 minute read
Astronomical scenes, especially those rendered interactively, often feature supraluminal camera motion over immense distances. These motions are sometimes rendered by applying trail effects to light-emitting objects to enhance the sensation of faster-than-light travel. Gaia Sky will get an implementation of motion trails in the next version (3.6.9). Motion trails are a visual effect that stretches stars, galaxies, and other light-emitting particles in the direction of the camera's velocity vector, giving a sense of speed and enhancing the perception of motion through space. This technique is inspired by relativistic visualizations and classic star-streak effects, but it is grounded in angular motion rather than raw velocity.
Vertex-based stretching of stars in supraluminal travel in Gaia Sky.
In this post, I describe the technical details that made implementing a performant, vertex-based solution into Gaia Sky possible.
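The core idea can be sketched in a few lines. This is a hypothetical, simplified illustration (plain Java instead of shader code, a toy 2D projection instead of a real view-projection matrix, and invented names throughout), not Gaia Sky's actual implementation: project each particle with the current and the previous frame's camera, take the difference as the screen-space velocity, and displace the trail vertex backwards along it.

```java
// Toy sketch of vertex-based trail stretching. All names and the
// projection are hypothetical stand-ins for the real shader pipeline.
public class TrailSketch {

    /** Trivial stand-in for a view-projection: translate, then scale. */
    static double[] project(double[] world, double[] camPos) {
        return new double[] {
            (world[0] - camPos[0]) * 0.01,
            (world[1] - camPos[1]) * 0.01
        };
    }

    /** Returns the trail-tail position in screen space for one particle. */
    static double[] trailTail(double[] world, double[] camNow,
                              double[] camPrev, double trailLength) {
        double[] now  = project(world, camNow);
        double[] prev = project(world, camPrev);
        // Screen-space velocity: how far the particle moved between frames.
        double vx = now[0] - prev[0];
        double vy = now[1] - prev[1];
        // Stretch the tail backwards along the velocity vector.
        return new double[] { now[0] - vx * trailLength,
                              now[1] - vy * trailLength };
    }

    public static void main(String[] args) {
        double[] star = { 1000.0, 0.0 };
        double[] tail = trailTail(star,
                new double[] { 100.0, 0.0 },   // camera this frame
                new double[] { 0.0, 0.0 },     // camera last frame
                1.0);
        System.out.printf("tail at (%.2f, %.2f)%n", tail[0], tail[1]);
    }
}
```

In a real renderer this displacement would happen per-vertex in the vertex shader, which is what makes the approach cheap enough for millions of particles.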
This float-128 implementation beats others at the same precision
4 minute read
A few days ago I wrote about benchmarking arbitrary precision floating-point libraries in Java. I found out that BigDecimal is not as slow as it is said to be, beating Apfloat at the same precision level by a wide margin in most operations. However, for Gaia Sky, I don’t need hundreds of significant digits at all. It turns out 27 significant digits are enough to represent the whole universe with a precision of 1 meter.
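The 27-digit figure is easy to verify back-of-the-envelope: the observable universe is roughly 93 billion light-years, or about 8.8 × 10²⁶ m, across, so representing any position in it down to 1 m needs about log₁₀(8.8 × 10²⁶ / 1) significant digits.

```java
// Sanity check of the "27 significant digits" claim.
public class DigitsNeeded {
    public static void main(String[] args) {
        double universeDiameterMeters = 8.8e26; // ~93 billion light-years
        double precisionMeters = 1.0;
        int digits = (int) Math.ceil(
                Math.log10(universeDiameterMeters / precisionMeters));
        System.out.println(digits + " significant digits"); // 27
    }
}
```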
Edit (2025-05-08):
Changed some test parameters and re-ran the tests. Also added bar plots.
Note: I have since written a new blog post that adds Quadruple to the benchmarks; it beats both Apfloat and BigDecimal consistently.
I recently set out to compare the performance of Apfloat and BigDecimal for arbitrary precision arithmetic in Java. I use arbitrary precision floating point numbers in key places of the update cycle in Gaia Sky, so it made sense to explore this. My initial approach was a naive benchmark: a simple main() method running arithmetic operations in a loop and measuring the time taken. The results were strongly in favor of BigDecimal, even for large precision values. This was unexpected, as the general consensus I found online suggested that Apfloat is more performant, especially for higher precision operations (hundreds of digits).
To get more accurate and reliable measurements, I decided to implement a proper JMH benchmark. The benchmark project source is available in this repository. The benchmarks test addition, subtraction, multiplication, division, power, natural logarithm, and sine for both Apfloat and BigDecimal at different precision levels.
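For context, this is the kind of fixed-precision operation the benchmarks measure (a standalone illustration, not the actual benchmark code from the repository): with BigDecimal, a MathContext pins every operation to a given number of significant digits, which is how precision levels are made comparable to Apfloat's.

```java
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

// Fixed-precision BigDecimal arithmetic at 27 significant digits.
public class BigDecimalPrecision {
    public static void main(String[] args) {
        MathContext mc = new MathContext(27, RoundingMode.HALF_EVEN);
        BigDecimal a = new BigDecimal("1.234567890123456789012345678901");
        BigDecimal b = new BigDecimal("3");
        BigDecimal sum  = a.add(b, mc);    // rounded to 27 digits
        BigDecimal quot = a.divide(b, mc); // division requires a MathContext
        System.out.println(sum.precision());  // at most 27
        System.out.println(quot.precision()); // at most 27
    }
}
```

Note that without a MathContext, a non-terminating division like 1/3 throws an ArithmeticException, so the context is not optional for a fair benchmark.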
Garmin announces new subscription plan for Connect
3 minute read
I’ve been a Garmin user for many years, shelling out non-trivial amounts of monies for their sports watches. My first Garmin watch was a Forerunner 10 (black/red). Battery life was abysmal, but back then this was the norm. Today, I’m sporting a Forerunner 255, which I love. It’s not top-of-the-line, but it is more than enough for my modest purposes. These devices have been my trusty companions on countless runs, football matches, and hikes, providing invaluable data without any hidden costs. But now, Garmin has decided to introduce Garmin Connect+, a subscription service priced at €8.99 per month or €89.99 annually. Really, Garmin?
We use Ollama and Chroma DB to build a personalized assistant from scraped web content
25 minute read
In my last post, I explored the concept of Retrieval-Augmented Generation (RAG) to enable a locally running generative AI model to access and incorporate new information. To achieve this, I used hardcoded documents as context, which were embedded as vectors and persisted into Chroma DB. These vectors are retrieved during inference and used as context for a local LLM chatbot.
But using a few hardcoded sentences is hardly elegant or particularly exciting. It’s alright for educational purposes, but that’s as far as it goes. To build a minimally useful system, we need something more sophisticated. In this new post, I set out to create a local Gaia Sky assistant by using the Gaia Sky documentation site and the Gaia Sky homepage as supplementary information, and leveraging Ollama to generate context-aware responses. So, let’s dive into the topic and explain how it all works.
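One step that scraped web content makes unavoidable is chunking: long pages must be split into overlapping pieces before embedding, so that each vector covers a coherent span of text. The post's actual pipeline may differ; this is a hypothetical minimal sketch of the idea, with invented names and sizes:

```java
import java.util.ArrayList;
import java.util.List;

// Split a long document into fixed-size, overlapping chunks prior to
// embedding. Sizes are illustrative; real pipelines often chunk on
// sentence or paragraph boundaries instead of raw character offsets.
public class Chunker {
    static List<String> chunk(String text, int size, int overlap) {
        List<String> chunks = new ArrayList<>();
        int step = size - overlap;
        for (int start = 0; start < text.length(); start += step) {
            int end = Math.min(start + size, text.length());
            chunks.add(text.substring(start, end));
            if (end == text.length()) break; // last chunk reached
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<String> c = chunk(
            "Gaia Sky is a real-time 3D astronomy visualisation software.",
            30, 10);
        System.out.println(c.size() + " chunks");
    }
}
```

The overlap keeps sentences that straddle a chunk boundary retrievable from at least one chunk.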
Let’s build a simple RAG application using a local LLM through Ollama.
11 minute read
Edit (2025-03-26): Added some words about next steps in the conclusion.
Edit (2025-03-25): I re-ran the example with a clean database and the results are better. I also cleaned up the code a bit.
Over the past few months I have been running local LLMs on my computer with various results, ranging from ‘unusable’ to ‘pretty good’.
Local LLMs are becoming more powerful, but they don’t inherently “know” everything. They’re trained on massive datasets, but those are typically static. To make LLMs truly useful for specific tasks, you often need to augment them with your own data: data that’s constantly changing, specific to your domain, or not included in the LLM’s original training. The technique known as RAG aims to bridge this gap by embedding context information into a vector database that is later used to provide context to the LLM, so that it can expand its knowledge beyond the original training dataset. In this short article, we’ll see how to build a very primitive local AI chatbot powered by Ollama with RAG capabilities.
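The retrieval step at the heart of RAG is conceptually simple: embed the query, find the stored vector most similar to it (typically by cosine similarity), and paste that vector's source text into the prompt as context. This is what the vector database does for us; the sketch below is an illustrative toy with tiny hand-made vectors, not real embedding-model output:

```java
// Toy nearest-neighbor retrieval over embedding vectors, the operation
// a vector database like Chroma DB performs at query time.
public class Retrieval {
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    /** Index of the stored embedding most similar to the query. */
    static int nearest(double[][] store, double[] query) {
        int best = 0;
        for (int i = 1; i < store.length; i++) {
            if (cosine(store[i], query) > cosine(store[best], query)) {
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        double[][] docs = { { 1, 0, 0 }, { 0, 1, 0 }, { 0.7, 0.7, 0 } };
        double[] query  = { 0.9, 0.1, 0 };
        System.out.println("best match: doc " + nearest(docs, query));
    }
}
```

The text associated with the winning vector is then prepended to the user's question before it is sent to the LLM.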
The source code used in this post is available here.