<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Next-Gen Tech: Navigating the AI Revolution | Nishchay Kaushik]]></title><description><![CDATA[Exploring the intersection of modern web development and Generative AI. Stay ahead of the curve with insights on LLMs, automation tools, and the latest tech stacks for the AI-first era.]]></description><link>https://blog.nkaushik.in</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1771053463113/2e33262f-639e-4249-a285-798cf06c6d7f.png</url><title>Next-Gen Tech: Navigating the AI Revolution | Nishchay Kaushik</title><link>https://blog.nkaushik.in</link></image><generator>RSS for Node</generator><lastBuildDate>Fri, 17 Apr 2026 13:43:36 GMT</lastBuildDate><atom:link href="https://blog.nkaushik.in/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Top 4 ways to Run LLM locally on Android and iOS]]></title><description><![CDATA[In my previous blog, I explored the technical rabbit hole of running Llama.cpp in Termux on an old Android phone. 
While that was a rewarding experiment, let's be honest: it wasn't exactly "plug-and-pl]]></description><link>https://blog.nkaushik.in/top-4-ways-to-run-llm-locally-on-android-and-ios</link><guid isPermaLink="true">https://blog.nkaushik.in/top-4-ways-to-run-llm-locally-on-android-and-ios</guid><category><![CDATA[Local LLM]]></category><category><![CDATA[llm]]></category><category><![CDATA[AI]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[ollama]]></category><category><![CDATA[gemma4]]></category><category><![CDATA[#qwen]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[Android]]></category><category><![CDATA[iOS]]></category><category><![CDATA[SelfHosting]]></category><category><![CDATA[generative ai]]></category><dc:creator><![CDATA[Nishchay Kaushik]]></dc:creator><pubDate>Thu, 02 Apr 2026 17:33:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/46a559f4-6a66-4e87-bbf5-a334133efb5d.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In my previous blog, I explored the technical rabbit hole of running Llama.cpp in Termux on an old Android phone. While that was a rewarding experiment, let's be honest: it wasn't exactly "plug-and-play" for most people.</p>
<p>But it's 2026, and the game has changed. You no longer need to be a Linux wizard to have a private, powerful AI in your pocket. In this guide, I'm covering the <strong>Top 4 ways</strong> you can run LLMs locally on your Android or iOS device—completely for free, with <strong>unlimited requests</strong>, and <strong>zero data leaving your phone</strong>.</p>
<h2>1. Google AI Edge Gallery: The "One-Click" Starter</h2>
<p><a href="https://github.com/google-ai-edge/gallery">https://github.com/google-ai-edge/gallery</a></p>
<p>Google's official showcase is the perfect starting point. It's the easiest way to see what your phone is actually capable of. It's optimized for mobile NPUs, meaning it's fast and efficient for tasks like tone refinement, audio transcription, and image understanding.</p>
<p>The latest version now supports <strong>Gemma 4</strong> and <strong>Gemma 3</strong>, models specifically "distilled" to run on mobile hardware without eating up your RAM.</p>
<p><strong>The "Skills" Update:</strong> One of the coolest additions is support for <strong>SKILLS</strong>. You can now extend what the AI can do by pulling from a vast library at <a href="https://skills.sh/">skills.sh</a>. My personal favorite is the <a href="https://skills.sh/blader/humanizer/humanizer">Humanizer</a> skill—it's great for making AI-generated text sound more, well, human.</p>
<h3>Demo</h3>
<p><a href="https://youtube.com/shorts/1H6z2-K7gvw">https://youtube.com/shorts/1H6z2-K7gvw</a></p>
<h3>Models Available</h3>
<img src="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/1043f69c-870f-4a48-84fc-405b37014bdc.png" alt="" style="display:block;margin:0 auto" />

<h4>Bundling your own model</h4>
<p>If you're feeling adventurous, you can even bundle your own models by following the <a href="https://huggingface.co/litert-community">LiteRT community guide</a>. It walks you through converting models from PyTorch or other formats.</p>
<hr />
<h2>2. PocketPal AI</h2>
<p><a href="https://github.com/a-ghorbani/pocketpal-ai">PocketPal AI</a> is the one I keep coming back to. It's built on <strong>llama.cpp</strong> (the industry standard), but the interface is actually designed for a human using a phone, not a developer staring at a terminal.</p>
<p>The <strong>'Pals' feature</strong> is the standout here. Instead of one messy chat thread, you can set up different "Pals" with their own system prompts. I have one for coding and another for creative writing. It also handles the latest architectures like <strong>Qwen 3.5</strong> and <strong>LFM 2.5</strong>, which are incredibly fast on mobile.</p>
<p><strong>Why it works for me:</strong></p>
<ul>
<li><p><strong>Easy Model Setup:</strong> You can download GGUF models directly within the app. For a smooth experience, I recommend any GGUF model (try <strong>Unsloth's</strong> uploads) with <strong>Q4_K_M quantization</strong>.</p>
</li>
<li><p><strong>100% Private:</strong> Once the model is on your phone, you can go into Airplane Mode and the AI won't even notice.</p>
</li>
<li><p><strong>Live Stats:</strong> It shows you tokens per second in real-time. It's satisfying to see your phone's hardware in action.</p>
</li>
</ul>
<hr />
<h2>3. AnythingLLM: The Researcher's Workspace</h2>
<p>If you need your AI to actually <em>do</em> something with your files, <a href="https://anythingllm.com/mobile">AnythingLLM</a> is a different beast. It's less of a chatbot and more of a portable workspace.</p>
<p>The killer feature is <strong>On-Device RAG</strong>. You can feed it a PDF or text file sitting on your phone, and the AI will answer questions based <strong>only</strong> on that document. I've used this to summarize 50-page technical docs during flights with zero internet.</p>
<p><strong>What sets it apart:</strong></p>
<ul>
<li><p><strong>Chat with your docs:</strong> Local indexing means your data never touches a server.</p>
</li>
<li><p><strong>Tools &amp; Agents:</strong> It supports web scraping, calendar editing, and the <strong>Model Context Protocol (MCP)</strong>, so it's slowly becoming a true mobile agent.</p>
</li>
<li><p><strong>The "Infinite Power" Trick:</strong> If your phone is struggling, you can connect it to a massive 70B model running on your home PC via API and use your phone as a remote window into that power.</p>
</li>
</ul>
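<p>A quick way to sanity-check that last trick: before pointing AnythingLLM at your home PC, confirm the endpoint is reachable from the phone. A minimal sketch, assuming the backend is Ollama and using a placeholder IP (substitute your PC's LAN or Tailscale address; note that Ollama only listens beyond localhost if you start it with <code>OLLAMA_HOST=0.0.0.0</code>):</p>
<pre><code class="language-bash"># Lists the models the remote Ollama instance can serve.
curl -s http://192.168.1.50:11434/api/tags \
  || echo "Endpoint not reachable - check the IP, port, and OLLAMA_HOST binding"
</code></pre>
<p>If this returns a JSON list of models, the same base URL should work as the API endpoint inside AnythingLLM.</p>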
<hr />
<h2>4. Termux + Ollama: The Power User Shortcut</h2>
<p>This is for my Android users who miss the command line. We're using <a href="https://termux.dev/">Termux</a> to run <a href="https://ollama.com/">Ollama</a> directly on the device. It's the closest you'll get to a desktop experience on a mobile screen.</p>
<p><strong>The Quick Setup:</strong></p>
<ol>
<li><p>Install Termux (use the version from F-Droid, not the Play Store).</p>
</li>
<li><p>Run <code>pkg install ollama</code>.</p>
</li>
<li><p>Type <code>ollama run qwen3.5:0.8b</code>.</p>
</li>
</ol>
<p>That's it. Ollama will auto-download the model and drop you into a CLI chat. You can run pretty much anything from the <a href="https://ollama.com/search">Ollama Library</a> as long as your RAM can handle it. It's the fastest way to test new models the second they drop.</p>
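<p>Bonus: Ollama also exposes a local REST API on port 11434, so other apps on your phone can use the model too. A minimal sketch, assuming the server is running (you may need <code>ollama serve &amp;</code> in a second Termux session) and the model from step 3 has been pulled:</p>
<pre><code class="language-bash"># One-off completion against the local Ollama API.
# "stream": false returns a single JSON object instead of a token stream.
curl -s http://localhost:11434/api/generate \
  -d '{"model": "qwen3.5:0.8b", "prompt": "Say hi in five words", "stream": false}' \
  || echo "Ollama is not reachable - is the server running?"
</code></pre>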
<hr />
<h2>Summary: Which one should you pick?</h2>
<table>
<thead>
<tr>
<th>Method</th>
<th>Best For</th>
<th>Ease of Use</th>
<th>Customization</th>
</tr>
</thead>
<tbody><tr>
<td><strong>Google Edge Gallery</strong></td>
<td>Beginners / Multi-Modal</td>
<td>⭐⭐⭐⭐⭐</td>
<td>⭐⭐</td>
</tr>
<tr>
<td><strong>PocketPal AI</strong></td>
<td>Daily Chat / Personalities</td>
<td>⭐⭐⭐⭐</td>
<td>⭐⭐⭐</td>
</tr>
<tr>
<td><strong>AnythingLLM</strong></td>
<td>Working with Documents / RAG</td>
<td>⭐⭐⭐</td>
<td>⭐⭐⭐⭐⭐</td>
</tr>
<tr>
<td><strong>Termux + Ollama</strong></td>
<td>Developers / CLI Lovers</td>
<td>⭐</td>
<td>⭐⭐⭐⭐</td>
</tr>
</tbody></table>
<h2>The era of "Local-First" AI</h2>
<p>The era of relying on expensive subscriptions and cloud-tracking for AI is ending. Whether you want a simple one-click app or a full terminal setup, you can now carry a "God-tier" brain in your pocket for $0.</p>
<p>Read more about Local AI and Self-Hosting on <a href="https://nkaushik.in?utm_source=top-4-local-ai">nkaushik.in</a>.</p>
<p><strong>Happy (Local) Chatting!</strong></p>
]]></content:encoded></item><item><title><![CDATA[Replace Google Photos: Self‑Host Immich in 2026]]></title><description><![CDATA[If you're anything like me, you probably also have tens of gigabytes of photos and videos, and with smartphone cameras getting better and better, we have started capturing more memories than we used to]]></description><link>https://blog.nkaushik.in/replace-google-photos-self-host-immich-in-2026</link><guid isPermaLink="true">https://blog.nkaushik.in/replace-google-photos-self-host-immich-in-2026</guid><category><![CDATA[Homelab]]></category><category><![CDATA[self-hosted]]></category><category><![CDATA[SelfHosting]]></category><category><![CDATA[#selfhosted]]></category><category><![CDATA[Google]]></category><category><![CDATA[Google Photos]]></category><category><![CDATA[privacy]]></category><category><![CDATA[self hosting]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[open source]]></category><category><![CDATA[Docker]]></category><category><![CDATA[tailscale]]></category><category><![CDATA[Immich]]></category><category><![CDATA[cloudflare]]></category><dc:creator><![CDATA[Nishchay Kaushik]]></dc:creator><pubDate>Wed, 25 Mar 2026 19:11:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/d0d202d4-1621-4066-b9a8-c8df16e19d37.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you're anything like me, you probably also have tens of gigabytes of photos and videos, and with smartphone cameras getting better and better, we have started capturing far more memories than we ever used to.</p>
<p>But saving and viewing these memories has started to become costly, and with every tech giant racing to feed our data into AI models, our private memories are being used to train them too.</p>
<p>I wanted to figure out how I could make these memories more accessible, and actually be able to see them again with ease.</p>
<p>Earlier I used to manually copy my photos to an external backup drive, but recently my phone died on me and I lost all my recent photographs, so it became really important to make this backup automatic.</p>
<p>The 2TB plan on Google Photos will set you back $100/yr, while the same on iCloud costs $132/yr. But let's be real: how often do we actually access these photos? Having the ability to access them matters, but you can optimize how you pay for it.</p>
<h2>The Best Google Photos Alternatives in 2026</h2>
<p>There are a couple of open-source options I came across, like Ente Photos and <strong>Immich</strong>.</p>
<p>I went ahead with Immich because I liked the interface, and because while researching the options I found a lot of positive sentiment about Immich in the Reddit community.</p>
<h2>Self-Host Immich</h2>
<img src="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/c7701e9f-19a1-4383-96c8-2288ffa6c394.png" alt="Immich logo" style="display:block;margin:0 auto" />

<blockquote>
<p><a href="https://immich.app/">https://immich.app/</a></p>
<p><em>Self-hosted photo and video management solution</em></p>
</blockquote>
<h2>Features of Immich</h2>
<p>Immich has pretty much every feature you can think of:</p>
<ul>
<li><p>Face Detection</p>
</li>
<li><p>Searching based on text/tags</p>
</li>
<li><p>Location Mapping (one of my favorites ❤️‍🔥)</p>
</li>
<li><p>and a lot more…</p>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/9824763a-ca3a-41d1-995f-d0f50017dee6.png" alt="Immich Map with photo markers" style="display:block;margin:0 auto" />

<p>Setting up Immich is quite straightforward, and there are multiple ways to do it. In this guide I’ll be using Docker, as that allows us to start fresh in case we mess anything up.</p>
<h2>Getting things ready</h2>
<h3>Docker &amp; Compose</h3>
<p>Follow the steps on <a href="https://docs.docker.com/get-started/get-docker/">https://docs.docker.com/get-started/get-docker/</a> to <strong>install docker and docker compose</strong>.</p>
<p>Because we are using Docker, the setup is identical across all operating systems.</p>
<h3>Setting things up</h3>
<p>The Immich documentation is an excellent starting point, offering a straightforward, step-by-step guide.</p>
<p>Follow the steps here: <a href="https://docs.immich.app/install/docker-compose">https://docs.immich.app/install/docker-compose</a></p>
<blockquote>
<p>⚠️</p>
<p>While Immich can store the photo library on either an SSD or an HDD, make sure you set up Immich itself, and most importantly the database it uses, on an SSD; this ensures everything stays snappy.</p>
<p>But if you do put the Immich database on an HDD, make sure to set the environment variable <code>DB_STORAGE_TYPE</code> to the appropriate value: <a href="https://docs.immich.app/install/environment-variables#database">https://docs.immich.app/install/environment-variables#database</a></p>
</blockquote>
<p>Once you have completed all the steps, you will have your personal Google Photos alternative, Immich, running at <code>http://&lt;machine-ip-address&gt;:2283</code> / <code>http://localhost:2283</code></p>
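<p>Before opening a browser, you can quickly confirm the instance is answering. A small sketch using curl (the first boot can take a minute while the containers initialize):</p>
<pre><code class="language-bash"># Prints the HTTP status code; expect 200 once Immich is up.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:2283 \
  || echo "Immich is not responding yet - check: docker compose logs"
</code></pre>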
<h2>Adding your existing albums</h2>
<h3>Mounting the directories</h3>
<p>While <strong>Immich</strong> is designed to back up photos from your devices, that doesn’t mean you need to upload existing photos manually. If you already have your photos and memories in a folder, you can point Immich at it and it will load them for you.</p>
<p>Here’s an example of the docker-compose.yml file that I use:</p>
<pre><code class="language-docker">services:
  immich_server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    ......
    volumes:
      .......
      # Modify below to change external albums path
      - /media/PhotosStorage:/mnt/media/og-photo-albums:ro # Change this to your media storage location
      ######
    .....
    restart: always
</code></pre>
<p>In the above compose file I use <code>:ro</code> to mount my external albums location in <code>read-only</code> mode. You can change it to <code>:rw</code> for <code>read-write</code> mode, which will also allow Immich to delete and edit the files.</p>
<p>Once you have added these folders to your docker-compose file, you need to restart your Immich instance.</p>
<p>You can do so by running the command below in the folder containing your Immich docker-compose file.</p>
<pre><code class="language-bash">docker compose up -d
</code></pre>
<p>After the restart, run <code>docker compose logs</code> in the same folder to confirm the Immich service came back up successfully with your external photo albums.</p>
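<p>To narrow that down to just the server container, you can limit the output to the most recent lines (adding <code>-f</code> streams new lines as they arrive; the container name comes from the compose file above):</p>
<pre><code class="language-bash"># Show the last 50 log lines of the Immich server container.
docker compose logs --tail=50 immich_server \
  || echo "Compose project not found - run this in the docker-compose folder"
</code></pre>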
<h3>Updating Immich to read external Albums</h3>
<p>Now, to make Immich load these photos and show them in the UI, you need to configure an external library in the admin settings.</p>
<p>Head over to your Immich instance, then go to <code>Settings &gt; Administration &gt; External Libraries</code></p>
<img src="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/3417c74e-7c18-47aa-8472-1bbd35432268.png" alt="Immich administration settings for external libraries" style="display:block;margin:0 auto" />

<p>Click on <code>Create Library</code></p>
<img src="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/5ac69b53-217a-4d39-b5ee-6078837d4e50.png" alt="External library top nav bar" style="display:block;margin:0 auto" />

<p>On the newly created External Library, click the <code>+ Add</code> button to add your external albums.</p>
<p>The path you add here comes from the docker-compose entry you added previously.</p>
<p>Example:</p>
<pre><code class="language-bash">- /media/PhotosStorage:/mnt/media/og-photo-albums
</code></pre>
<p>Insert the path value you see on the right side of the <code>:</code>, i.e., in this case it would be <code>/mnt/media/og-photo-albums</code></p>
<img src="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/a9ad6281-8555-42ce-9b3e-503b363a364c.png" alt="Immich administration external library folders config" style="display:block;margin:0 auto" />

<h2>Triggering Jobs</h2>
<p>Once you have added your external album, hit <code>Scan</code> to have Immich read all your photos and videos.</p>
<p>You can also manually trigger the jobs that generate thumbnails, run duplicate detection, face detection, etc. by going to the <code>Job Queues</code> page.</p>
<p>When you run them for the first time, these jobs may take a while depending on the size and number of files in your folder, <strong>so be patient</strong> and let them run through.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/8e6c667b-4ed8-4600-8aea-c857f1b04733.png" alt="Immich Administration Job Queues page" style="display:block;margin:0 auto" />

<p>You can check the status of what jobs are running and how much work is pending on the same page.</p>
<p>Once everything has finished, your external album is ready to use <strong>🎉</strong></p>
<h2>Making it Public and Secure</h2>
<p>To truly use Immich as a backup option, it is important that we can access it from outside our home, so that backups keep working wherever we are.</p>
<p>There are 2 ways to go about it:</p>
<ol>
<li><p>Cloudflare Tunnel</p>
</li>
<li><p>Tailscale</p>
</li>
</ol>
<p>Both of these methods have their own benefits and downsides; let me quickly cover those:</p>
<p><strong>Cloudflare</strong></p>
<ul>
<li><p>You must own a domain</p>
</li>
<li><p>100MB limit on file uploads under the free plan (a problem for videos)</p>
</li>
<li><p>Anyone can reach it without creating a Cloudflare account</p>
</li>
</ul>
<p><strong>Tailscale</strong></p>
<ul>
<li><p>No domain required</p>
</li>
<li><p>Requires the VPN to be connected at all times</p>
</li>
<li><p>Need to invite users to Tailscale to access Immich</p>
</li>
<li><p>No file upload limits</p>
</li>
</ul>
<p>Depending on which one you prefer, you can follow either of these setups:</p>
<h3>Cloudflare setup</h3>
<ol>
<li><p>Create a Cloudflare tunnel <a href="https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/get-started/">https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/get-started/</a></p>
</li>
<li><p>In the “Publish an application” step, publish Immich, which is running on <code>localhost:2283</code></p>
</li>
<li><p>Once your tunnel is up, you can now access your Immich over the subdomain you created. eg., <code>immich.domain.com</code></p>
</li>
<li><p>Open the URL above to verify that Immich is running and accessible, and that you land on the login page.</p>
</li>
</ol>
<p>If you are able to access Immich, you can now use this URL in the Immich app.</p>
<p>Next, we can make this a bit more secure, so that not everyone can reach even the login page of Immich; this prevents unauthorized access to your instance.</p>
<p>To do that, we will set up “Access Control”.</p>
<p>Head over to <a href="https://developers.cloudflare.com/cloudflare-one/access-controls/applications/http-apps/self-hosted-public-app/">https://developers.cloudflare.com/cloudflare-one/access-controls/applications/http-apps/self-hosted-public-app/</a> and set up a new application. While setting up the policies, you can control exactly who can access the application, filtering by email. Once the application is set up, accessing the URL will land you on a login page like this:</p>
<img src="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/a6b2bddc-36f4-4ad6-b40f-dfc884185382.png" alt="Cloudflare Access login page" style="display:block;margin:0 auto" />

<p>Anyone can enter their email and request a code to get access, but you can limit which emails are allowed to do so, essentially blocking everyone else from even getting in.</p>
<p>However, once we enable access control, it prevents the Immich app from opening the URL.</p>
<p>To solve this, we will add another “Policy” to the app, called “Service Auth”, which the Immich app will use.</p>
<p>Follow the steps on <a href="https://developers.cloudflare.com/cloudflare-one/access-controls/service-credentials/service-tokens/">https://developers.cloudflare.com/cloudflare-one/access-controls/service-credentials/service-tokens/</a> and create a new Service Token.</p>
<p>You should now have two values; save them securely.</p>
<pre><code class="language-bash">CF-Access-Client-Id: &lt;CLIENT_ID&gt;

CF-Access-Client-Secret: &lt;CLIENT_SECRET&gt;
</code></pre>
<p>Now, go back to the “Application” you created in the previous step and open the “Policies” tab.</p>
<p>Click “Select existing policies” and pick the newly created “Service Auth”.</p>
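<p>At this point you can verify the Service Token from any terminal before touching the app. A hedged example: the hostname is a placeholder for your own subdomain and the bracketed values are the two you saved earlier, but the header names are exactly the ones Cloudflare Access expects:</p>
<pre><code class="language-bash"># With a valid token this should print 200 (the Immich login page);
# without it, Cloudflare Access intercepts the request instead.
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "CF-Access-Client-Id: &lt;CLIENT_ID&gt;" \
  -H "CF-Access-Client-Secret: &lt;CLIENT_SECRET&gt;" \
  https://immich.domain.com \
  || echo "Request failed - check the tunnel and the token values"
</code></pre>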
<p>Your Immich application now supports two ways for users to “access” it: 1. using the one-time code sent to an allowed email, or 2. using the secret headers “CF-Access-Client-Id” and “CF-Access-Client-Secret” (which is primarily for the Immich app).</p>
<p>Now, to configure the Immich app:</p>
<ul>
<li><p>Go to settings</p>
</li>
<li><p>Advanced</p>
</li>
<li><p>Custom proxy headers</p>
</li>
<li><p>Add the values you saved previously</p>
</li>
</ul>
<p>It should now look something like this:</p>
<img src="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/ad7a7ec5-5a00-449e-ae4f-7bd0e3a15cd0.png" alt="Immich app settings custom proxy headers" style="display:block;margin:0 auto" />

<p>You can now add the URL in your “Networking” section of the Immich app settings.</p>
<p>You should see a green check-mark ✅ next to it if the app is able to access it.</p>
<p>Congrats your Immich app is now ready. 🎉</p>
<h3>Tailscale setup</h3>
<p>Go to <a href="https://login.tailscale.com/welcome">https://login.tailscale.com/welcome</a> and create a new account</p>
<p>Then click “Add device” and install Tailscale on your system.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/fdd17370-77e8-4b6b-a528-c9787d2e2929.png" alt="Tailscale new device menu" style="display:block;margin:0 auto" />

<p>Now your system should show up in the “Machines” tab.</p>
<p>Now go to the “Services” tab and click “Define a service”.</p>
<p>Enter the name of the service and use ‘443’ as the port.</p>
<p>It should look something like this:</p>
<img src="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/d31b8197-9896-4fa7-a205-ed5e3034bc8d.png" alt="" style="display:block;margin:0 auto" />

<p>After defining the service, Tailscale should give you a command to run.</p>
<p>It might look like:</p>
<pre><code class="language-bash">tailscale serve --service=svc:immich --https=443 127.0.0.1:2283
</code></pre>
<p>Run it in a terminal on your system, and then accept the service on the Services tab in Tailscale.</p>
<p>Once you finish this, you should see “🟢 Online” next to the service name.</p>
<p>Now connect to the Tailscale VPN using the client and try opening the URL shown next to the service.</p>
<p>You should now be able to access Immich over Tailscale. Remember, this URL only works while you are connected to the Tailscale VPN.</p>
<p>You can also add this URL in the Immich app under the “Networking” section of the Immich settings.</p>
<p>Once you see the green check-mark <strong>✅</strong> next to your tailscale URL in Immich app, you are done!</p>
<p>Your Immich app is now remotely accessible.🎉</p>
<h2>Ending thoughts</h2>
<p>Running your own Google Photos alternative like Immich lets you take back control of your memories — cutting recurring cloud costs, improving privacy, and automating backups so you don’t lose recent photos again.</p>
<p>It does require some upfront effort (hosting, storage planning, and occasional maintenance), but the tradeoffs are worth it if you value long-term accessibility and ownership of your data.</p>
<p>Start small: test with a subset of your library, automate uploads from your phone, and keep an offsite or secondary backup for redundancy.</p>
<p>With the active open-source community and clear docs, self-hosting is a practical, cost-effective option for anyone ready to invest a little time for greater control.</p>
<h2>What’s Next?</h2>
<p>I am writing about all the various services I have been self-hosting.</p>
<p>Read my other posts on Self-Hosting on:</p>
<p><a href="https://nkaushik.in/writing/series/homelab/">https://nkaushik.in/writing/series/homelab/</a></p>
<p><a href="https://nkaushik.in/writing/getting-started-with-self-hosting-in-2026">https://nkaushik.in/writing/getting-started-with-self-hosting-in-2026</a></p>
<p><a href="https://nkaushik.in/writing/adguard-home-for-families-how-to-stop-ads-and-trackers-without-installing-extensions/">https://nkaushik.in/writing/adguard-home-for-families-how-to-stop-ads-and-trackers-without-installing-extensions/</a></p>
]]></content:encoded></item><item><title><![CDATA[Getting Started with Self-Hosting in 2026]]></title><description><![CDATA[TL;DR:
Start your self-hosting journey in 2026 with affordable hardware, stable software like Debian, and containerization via Docker.
This guide is for anyone who wants to take control of their data, ]]></description><link>https://blog.nkaushik.in/getting-started-with-self-hosting-in-2026</link><guid isPermaLink="true">https://blog.nkaushik.in/getting-started-with-self-hosting-in-2026</guid><category><![CDATA[beginner homelab guide]]></category><category><![CDATA[home]]></category><category><![CDATA[self-hosted]]></category><category><![CDATA[Immich]]></category><category><![CDATA[guide]]></category><category><![CDATA[Beginner Developers]]></category><category><![CDATA[Homelab]]></category><category><![CDATA[#selfhosted]]></category><category><![CDATA[SelfHosting]]></category><category><![CDATA[self hosting]]></category><category><![CDATA[home lab]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Docker compose]]></category><category><![CDATA[debian]]></category><category><![CDATA[guide2026]]></category><category><![CDATA[lmmich]]></category><category><![CDATA[adguardhome]]></category><dc:creator><![CDATA[Nishchay Kaushik]]></dc:creator><pubDate>Mon, 23 Mar 2026 17:15:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/a16659ff-2682-4463-a591-b2048c73779d.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>TL;DR:</strong></p>
<p>Start your self-hosting journey in 2026 with affordable hardware, stable software like Debian, and containerization via Docker.</p>
<p>This guide is for anyone who wants to take control of their data, starting with family photos and expanding to network-wide services. No advanced Linux knowledge is required.</p>
<h2>Introduction</h2>
<p>Do you also have hundreds of GBs of photos and videos of you and your family that you rarely ever look at?</p>
<p>That bothered me too. Finding a solution started my self-hosting journey.</p>
<p>While solutions like Google Photos existed, the idea of uploading all that data and then paying a subscription for something I would rarely access did not feel right.</p>
<p>I wanted something which could just reduce the friction of accessing my library and that's when I came across this great project known as <a href="https://immich.app/">Immich</a><strong>🔗</strong>.</p>
<p>What started as a single self-hosted service soon exploded, and I now run <em><strong>30+</strong></em> self-hosted services, some of them even publicly accessible.</p>
<blockquote>
<p><mark class="bg-yellow-200 dark:bg-yellow-500/30">In this blog I'll go over how you can start your Self-Hosting Journey and How to choose the "right hardware" for your needs.</mark></p>
</blockquote>
<h2><strong>Hardware Overview</strong></h2>
<p>I started off by running Immich in a Docker container on my Legion 5i laptop (16GB RAM, i7-14650HX, NVIDIA 4060 GPU), which allowed me to test things out first.</p>
<p>Once I had everything set up and running, and felt it would work for my use case, I started looking into dedicated hardware, as running a big clunky laptop 24x7 was NOT an effective solution.</p>
<p>I debated buying a Raspberry Pi 5, but its price in India was high, and adding the various HATs pushed the cost up significantly, making it less of an option for me. Another limiting factor was future scalability: I was worried the Pi's hardware limitations would eventually prevent me from expanding into more complex services.</p>
<p>Ultimately, I settled on an old mini PC: a refurbished HP EliteDesk 800 G2 Mini for ₹11,500.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/5e1bfc42-05cb-4c1e-9e1f-ba1f40ef784b.png" alt="" style="display:block;margin:0 auto" />

<p>The configuration I currently have for my mini PC is:</p>
<blockquote>
<p><em>CPU:</em> i5-6500T</p>
<p><em>RAM:</em> 8GB DDR4</p>
<p><em>Storage:</em> 256GB NVMe SSD, 256GB internal SATA SSD</p>
<p><em>External Storage:</em> 500GB HDD connected over USB</p>
<p><em>Networking:</em> 1Gbps Ethernet</p>
</blockquote>
<p>I have my server connected to my router via a Cat 5e Ethernet cable.</p>
<h3>How to choose the Hardware?</h3>
<p>If you are just getting into self-hosting, then I would recommend first asking yourself these 2 questions:</p>
<ol>
<li><p>Why do you want to do it? What services do you plan on self-hosting?</p>
</li>
<li><p>Who are you self-hosting for? How many users will this system support?</p>
</li>
</ol>
<p>If the answer to the first is only a couple of services, and the answer to the second is fewer than 5 users, then you don't necessarily need to invest big upfront; instead, I would recommend utilizing any old PC or laptop you may already have.</p>
<p>That said, if you do want to buy new hardware, I would recommend NOT buying an SBC like the Raspberry Pi, simply because a mini PC or a full-size system will give you much more headroom should you choose to expand your services (which you will, speaking from experience 😅).</p>
<p>Go for at least 8 or 16GB of RAM and at least a 256GB internal NVMe SSD. While I use a 6th Gen i5-6500T, which remains an excellent budget entry point in 2026 due to its low power draw and cost, you might also consider <strong>8th Gen+ Intel CPUs</strong> for official Windows 11 support and better Quick Sync, or <strong>10th/12th Gen</strong> for modern AV1 decoding/encoding. The key is that you can start with almost anything and scale up later.</p>
<p>You can treat the internal SSD as premium storage, using it only for the primary system, and keep all your data on a hard disk, which is much more economical and works for pretty much every use case.</p>
<h2><strong>Software / OS</strong></h2>
<p>To effectively utilize my machine as a server, I installed Debian 12 (now upgraded to Debian 13) for its stability, something crucial for a system I plan on running 24x7.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/f42efce1-a20c-4dc1-adcc-e404f3beaea6.png" alt="Debian Logo" style="display:block;margin:0 auto" />

<p>As I planned on primarily using the terminal to access my system, I installed Xfce4 as my desktop environment to keep resource usage low and leave as much as possible for the services I would be running.</p>
<p>I then installed <a href="https://docs.docker.com/compose/">Docker &amp; Docker Compose</a>, since I would be running all the services in containers. This keeps things clean on my system, keeps the services isolated from each other, and lets me try out new services and restart from scratch if I mess things up.</p>
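<p>On Debian, one way to get both is Docker's official convenience script, which also installs the Compose plugin (as always, review a script before piping it into a shell):</p>
<pre><code class="language-shell"># Install Docker Engine + the compose plugin (Debian/Ubuntu)
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Optional: run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER

# Verify the install
docker --version
docker compose version
</code></pre>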
<p>I also set up an <a href="https://www.debian.org/doc/manuals/debian-reference/ch05.en.html#_the_ssh_server">SSH server</a> to ensure I can access my machine via the terminal, and <a href="http://xrdp.org/">xrdp</a> to be able to RDP into my system when needed.</p>
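<p>Both are available from Debian's standard repositories; a minimal setup looks like this:</p>
<pre><code class="language-shell"># SSH server for terminal access
sudo apt update
sudo apt install -y openssh-server
sudo systemctl enable --now ssh

# xrdp for remote desktop sessions
sudo apt install -y xrdp
sudo systemctl enable --now xrdp
</code></pre>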
<p>In another blog, I'll cover in detail all the software and packages I use and how I have them set up.</p>
<h3>How to choose the OS?</h3>
<p>If you are new to Linux, look for a distro whose UI is closest to what you are already familiar with. Or, if nothing else, use Debian: it is stable, ships updates regularly, and you can run it with an install-and-forget mindset.</p>
<p>Regardless of which distro you choose, you can always change the UI (<a href="https://wiki.debian.org/DesktopEnvironment">desktop environment</a>) later on. I started with Xfce but then also installed KDE Plasma, and now I can choose between the two when I log into the system directly.</p>
<h2><strong>Network Diagram</strong></h2>
<p>This is what my current network architecture looks like.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/d01a3ec6-c4d7-42e0-95ad-82ad3f68da14.svg" alt="" style="display:block;margin:0 auto" />

<p>My server is currently connected directly to the Wi-Fi router over Ethernet, and I tend to keep Wi-Fi off on it. I have also assigned it a fixed IP in my router settings, in case I ever unplug the Ethernet cable and connect over Wi-Fi.</p>
<p>The fixed IP is crucial, not just for ad-blocking, but to ensure you can always find your server via SSH or access your hosted services without the address changing.</p>
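<p>If your router does not support DHCP reservations, you can pin a static IP on the server itself with NetworkManager. This is only a sketch; the connection name and the <code>192.168.1.x</code> addresses below are placeholders, so substitute your own network's values:</p>
<pre><code class="language-shell"># Find the server's current IP and MAC (also needed for a router-side reservation)
ip addr show

# Or set a static IP on the server itself via NetworkManager
sudo nmcli con mod "Wired connection 1" \
  ipv4.addresses 192.168.1.50/24 \
  ipv4.gateway 192.168.1.1 \
  ipv4.dns 192.168.1.1 \
  ipv4.method manual
sudo nmcli con up "Wired connection 1"
</code></pre>
<p>A DHCP reservation in the router is usually the cleaner option, since the address then follows the machine's MAC automatically.</p>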
<p>You can read more about setting up ad-blocking here: <a href="https://nkaushik.in/writing/adguard-home-for-families-how-to-stop-ads-and-trackers-without-installing-extensions">https://nkaushik.in/writing/adguard-home-for-families-how-to-stop-ads-and-trackers-without-installing-extensions</a></p>
<h2><strong>Backups: Protect your Data</strong></h2>
<p>The most important rule of self-hosting is: <strong>RAID is not a backup.</strong></p>
<p>Since your family photos are likely the most precious data you'll host, a robust backup strategy is critical. I recommend the <strong>3-2-1 strategy</strong>:</p>
<ul>
<li><p><strong>3</strong> copies of your data (Original + 2 backups).</p>
</li>
<li><p><strong>2</strong> different media types (e.g., Internal SSD and External HDD).</p>
</li>
<li><p><strong>1</strong> copy offsite (Cloud storage like Backblaze B2 or AWS S3 using tools like <strong>Restic</strong> for automated, encrypted backups).</p>
</li>
</ul>
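<p>As a sketch of the offsite leg with <strong>Restic</strong> (the bucket name and paths here are placeholders for illustration, not my actual setup):</p>
<pre><code class="language-shell"># One-time: initialise an encrypted repository on Backblaze B2
export B2_ACCOUNT_ID="..."        # your B2 key ID
export B2_ACCOUNT_KEY="..."       # your B2 application key
restic -r b2:my-backup-bucket:server init

# Recurring (e.g. via cron): back up the data directory
restic -r b2:my-backup-bucket:server backup /mnt/data/photos

# Prune old snapshots, keeping 7 daily and 4 weekly copies
restic -r b2:my-backup-bucket:server forget --keep-daily 7 --keep-weekly 4 --prune
</code></pre>
<p>Restic encrypts everything client-side, so the cloud provider never sees your photos in the clear.</p>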
<p>We often overlook that with self-hosting, we are responsible for both reliability and security.</p>
<hr />
<h2><strong>What's Next?</strong></h2>
<p>Once you have set up these three main pillars (Hardware, Software &amp; Networking), you are ready to start your self-hosted journey.</p>
<p>This blog is just the beginning. In my <a href="https://nkaushik.in/writing/series/homelab/"><strong>Homelab Series</strong></a>, I'll be covering a lot more services in detail:</p>
<ul>
<li><p><strong>Media:</strong> Setting up Jellyfin or Plex for your movie collection.</p>
</li>
<li><p><strong>Photos:</strong> A deep dive into Immich for your family memories.</p>
</li>
<li><p><strong>Monitoring:</strong> Using Uptime Kuma and Ntfy for real-time status and notifications.</p>
</li>
<li><p><strong>Remote Access:</strong> Safely accessing your services from anywhere via Cloudflare Tunnels or Tailscale, and setting up Zero Trust.</p>
</li>
</ul>
<p>Update:</p>
<p>Check out the other Self-Hosting posts:</p>
<ul>
<li><p>Immich Guide <a href="http://nkaushik.in/writing/replace-google-photos-self-host-immich-in-2026">http://nkaushik.in/writing/replace-google-photos-self-host-immich-in-2026</a></p>
</li>
<li><p>AdGuard Home <a href="https://nkaushik.in/writing/adguard-home-for-families-how-to-stop-ads-and-trackers-without-installing-extensions">https://nkaushik.in/writing/adguard-home-for-families-how-to-stop-ads-and-trackers-without-installing-extensions</a></p>
</li>
</ul>
<p>Now go ahead and start hosting!</p>
]]></content:encoded></item><item><title><![CDATA[AdGuard Home: Stop Ads & Trackers for Your Entire Family]]></title><description><![CDATA[How often have you encountered the scenario where you search something and suddenly everywhere you go, you start seeing ads for it. Or, when you try to read a news article and you can barely read anyt]]></description><link>https://blog.nkaushik.in/adguard-home-for-families-how-to-stop-ads-and-trackers-without-installing-extensions</link><guid isPermaLink="true">https://blog.nkaushik.in/adguard-home-for-families-how-to-stop-ads-and-trackers-without-installing-extensions</guid><category><![CDATA[Homelab]]></category><category><![CDATA[#selfhosted]]></category><category><![CDATA[SelfHosting]]></category><category><![CDATA[adguardhome]]></category><category><![CDATA[AdGuard]]></category><category><![CDATA[privacy]]></category><category><![CDATA[phishing]]></category><category><![CDATA[Security]]></category><category><![CDATA[ads]]></category><category><![CDATA[adblock]]></category><category><![CDATA[parental control ]]></category><dc:creator><![CDATA[Nishchay Kaushik]]></dc:creator><pubDate>Thu, 12 Mar 2026 13:47:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/3ec8c18d-f822-45f1-8e8e-a4ed639cfd59.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>How often have you encountered the scenario where you search something and suddenly everywhere you go, you start seeing ads for it. Or, when you try to read a news article and you can barely read anything with all the ads showing up.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/4987fef2-f5b3-438f-9b46-b8947fa0629a.jpg" alt="example of a website filled with full page ads" style="display:block;margin:0 auto" />

<p>Although you might use apps like Firefox or extensions such as Adblock, they must be installed on every device, which is often impractical.</p>
<p>I faced a similar issue because my family, like many others, isn't tech-savvy. I needed a solution that could be implemented to shield them from ads and malware without requiring any changes on their part.</p>
<h2>Enter, AdGuardHome</h2>
<p><a href="https://github.com/AdguardTeam/AdGuardHome">https://github.com/AdguardTeam/AdGuardHome</a> is a wonderful thing. I have found it extremely easy to Install, Manage and Configure.</p>
<p>In this blog, I will cover how you can install AdGuard yourself.</p>
<div>
<div>🤖</div>
<div>Let's assume we're working in a Linux environment, as it simplifies the installation and setup of such services. However, the steps are generally similar across all operating systems.</div>
</div>

<hr />
<h2>Installation</h2>
<p>Head over to <a href="https://github.com/AdguardTeam/AdGuardHome?tab=readme-ov-file#automated-install-linux-and-mac">https://github.com/AdguardTeam/AdGuardHome?tab=readme-ov-file#automated-install-linux-and-mac</a> and you will notice that installing AdGuard is as simple as running a single command.</p>
<p>To install with <code>curl</code> run the following command:</p>
<pre><code class="language-shell">curl -s -S -L https://raw.githubusercontent.com/AdguardTeam/AdGuardHome/master/scripts/install.sh | sh -s -- -v
</code></pre>
<p>To install with <code>wget</code> run the following command:</p>
<pre><code class="language-shell">wget --no-verbose -O - https://raw.githubusercontent.com/AdguardTeam/AdGuardHome/master/scripts/install.sh | sh -s -- -v
</code></pre>
<p>This will install the AdGuard Home service, which acts as your own DNS server.</p>
<h4>What is DNS?</h4>
<p>DNS stands for Domain Name System. In very simple terms, it is like the internet's phone book: every website you visit actually has an IP address, e.g., 66.11.184.216, which as you can imagine is very hard to remember. So the phone book holds an entry that says: when you look up <a href="http://example.com">example.com</a>, call 66.11.184.216.</p>
<p>DNS is essential because it allows us to use friendly names instead of numbers.</p>
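<p>You can watch this lookup happen yourself with <code>dig</code> (or <code>nslookup</code> on Windows):</p>
<pre><code class="language-shell"># Ask your current DNS server for the address behind a domain name
dig +short example.com

# Ask a specific DNS server instead (here, Cloudflare's public resolver)
dig +short example.com @1.1.1.1
</code></pre>
<p>Running your own DNS server, as we do here, simply means your devices direct these questions to AdGuard first, which answers (or blocks) them.</p>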
<h2>Setting up AdGuard Home</h2>
<blockquote>
<p><a href="https://adguard-dns.io/kb/adguard-home/getting-started/">https://adguard-dns.io/kb/adguard-home/getting-started/</a></p>
</blockquote>
<p>After installation, a web interface should start on port 3000, allowing you to configure AdGuard. Just open the URL <a href="http://localhost:3000">http://localhost:3000</a> to set it up.</p>
<p>Step 1. Click <code>Get Started</code></p>
<img src="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/efb09ac6-5fa5-4243-a87f-77f4b38cb8ca.png" alt="AdGuard Home Get Started page" style="display:block;margin:0 auto" />

<p>Step 2:</p>
<p>Here you can configure the port on which AdGuard will serve its admin UI after installation. By default it uses port 80; I highly recommend changing it.</p>
<p>Next, you'll have the option to change the port on which your DNS server runs after installation. By default, it uses port 53, and I recommend keeping it as is.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/f904e82c-ee5a-4942-8708-bb4fa55ae986.png" alt="AdGuard Home network port configuration page" style="display:block;margin:0 auto" />

<p>Step 3:</p>
<p>Set up your admin credentials to manage AdGuard.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/959d2d4b-3b21-4de0-9c2b-3b140d5d292b.png" alt="AdGuard Home admin credentials setup" style="display:block;margin:0 auto" />

<p>Continue through the remaining steps to finish setting up.</p>
<p>After completing all the steps, you can access the admin interface at the port you configured in step 2.</p>
<p>When you arrive at the admin page, it may appear empty because you still need to configure your devices to use AdGuard.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/788668fe-820a-4e2d-bf0e-96baf916ea8b.png" alt="AdGuard Home DNS query dashboard overview" style="display:block;margin:0 auto" />

<p>You can add the filters by going to <code>Filters &gt; DNS blocklists</code> from the top nav-bar.</p>
<p>Here's a list of all the block-lists I have enabled.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/b9e31f20-770b-4df3-b1fc-809a33b7bc02.png" alt="AdGuard Home DNS blocklist settings" style="display:block;margin:0 auto" />

<h2>Using AdGuard with your devices</h2>
<p>The best way to deploy AdGuard is to configure it at the router level.</p>
<p>Head over to the <code>Setup Guide</code> page on your AdGuard admin interface and you can find the details and steps to set it up on different devices.</p>
<p>Before setting it up on your router, I recommend configuring it only on your own system first. This ensures you won't lose internet access for the whole network if something goes wrong, and lets you make adjustments as needed.</p>
<p>For example, on Windows systems you can follow these steps:</p>
<img src="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/422bc309-52b6-4c93-b192-3b789bfd1137.png" alt="Windows DNS settings for AdGuard Home" style="display:block;margin:0 auto" />

<p>Once you have verified it on a single device and can see traffic flowing into your AdGuard instance (i.e., DNS queries showing up on the AdGuard interface), you can go ahead and configure it at the router level.</p>
<p>In your router's admin panel, locate the DHCP settings and enter the IP address where your AdGuard is running in the DNS field. You can find this IP on the Setup Guide page; it will resemble something like 192.168.xx.xxx.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/0e9b58ac-20ca-4aa1-9700-f72393fc36f4.jpg" alt="TP-Link Router DHCP DNS settings example" style="display:block;margin:0 auto" />

<p>Once you've configured it at the router level, all devices connected to your home network will have AdGuard enabled.</p>
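<p>A quick way to confirm a device is actually using AdGuard is to query it directly. The <code>192.168.1.50</code> below is a placeholder for your AdGuard server's IP, and the exact blocked answer depends on the blocking mode configured in AdGuard:</p>
<pre><code class="language-shell"># A normal domain should resolve to a real IP
nslookup example.com 192.168.1.50

# A domain on your blocklists should come back as 0.0.0.0
# (or NXDOMAIN, depending on AdGuard's blocking mode)
nslookup doubleclick.net 192.168.1.50
</code></pre>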
<h2>Bonus</h2>
<p>In addition to blocking ads and trackers, AdGuard offers the added benefit of enabling content blocking across your entire home network.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/e7c66db1-d427-4d1e-88ce-cd18f8cd83af.png" alt="Adguard Home block services eg., youtube, tiktok" style="display:block;margin:0 auto" />

<p>It offers an easy-to-use method to block services you want to restrict, such as adult content and gambling.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/c391abda-b4b7-48fc-b9f5-10ba2445499e.png" alt="Adguard home schedule service blocking" style="display:block;margin:0 auto" />

<p>You can take it a step further by blocking services for specific time periods. For example, you can restrict access to YouTube for your kids outside designated hours, ensuring effective parental control.</p>
<h2>Conclusion</h2>
<p>AdGuard Home gives you a simple, network-wide way to reclaim privacy, reduce intrusive ads, and add a layer of protection against malicious domains — all without installing browser extensions on every device. Once installed on a small always-on machine (Raspberry Pi, home server, or VPS), it works silently for every device that uses your network DNS, making it ideal for non-technical family members.</p>
<p>To finish your setup: point devices or your router to AdGuard Home as the DNS, enable the appropriate blocklists and filters, whitelist sites you trust, and secure the web UI with a strong password (or VPN). Regularly update filter lists and back up your configuration. Monitor logs briefly after deployment to catch any false positives and adjust rules as needed.</p>
<p>AdGuard Home is low-maintenance but powerful — a practical step toward a cleaner, safer browsing experience for everyone on your network.</p>
]]></content:encoded></item><item><title><![CDATA[Self‑Hosting on a Budget: What I Run on My Mini PC]]></title><description><![CDATA[Last year, I decided to buy a second-hand mini PC, and I couldn't be happier with that choice!
Why? Because it opened the door to the world of self-hosting. Initially, my goal was simply to organize m]]></description><link>https://blog.nkaushik.in/self-hosting-on-a-budget-what-i-run-on-my-mini-pc</link><guid isPermaLink="true">https://blog.nkaushik.in/self-hosting-on-a-budget-what-i-run-on-my-mini-pc</guid><category><![CDATA[Homelab]]></category><category><![CDATA[homelabbing]]></category><category><![CDATA[self-hosted]]></category><category><![CDATA[#selfhosted]]></category><category><![CDATA[Google Photos]]></category><category><![CDATA[Immich]]></category><category><![CDATA[adguardhome]]></category><category><![CDATA[services]]></category><category><![CDATA[Mini PC]]></category><dc:creator><![CDATA[Nishchay Kaushik]]></dc:creator><pubDate>Thu, 12 Mar 2026 13:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/fb3c3e02-38bd-4c64-824a-23bae12b5172.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last year, I decided to buy a second-hand mini PC, and I couldn't be happier with that choice!</p>
<p>Why? Because it opened the door to the world of self-hosting. Initially, my goal was simply to organize my photo library, which had been tucked away on my hard disk drive. However, I ended up running over 35 services in Docker containers!</p>
<p>Here's a preview of all the services I currently run on my server:</p>
<img src="https://cdn.hashnode.com/uploads/covers/6984dc9e559eb2d777b15a66/a98f263a-41a6-4f79-a93c-e58a0fddf68b.jpg" alt="Image showing multiple charts with text" style="display:block;margin:0 auto" />

<p>While I run a plethora of services, there are a couple of services that are top-tier for me, namely:</p>
<ul>
<li><p><strong>Adguard</strong>: For Adblocking and custom DNS</p>
</li>
<li><p><strong>Immich</strong>: The Google Photos alternative</p>
</li>
<li><p><strong>Jellyfin:</strong> For all the Linux ISOs 🙂</p>
</li>
<li><p><strong>Outline Wiki</strong>: For internal notes, documentation and wikis</p>
</li>
</ul>
<p>These are some of the services I use frequently, while others quietly work behind the scenes, acting as the unsung heroes that keep everything running smoothly.</p>
<h2>To Begin</h2>
<p>In this multi-part series, I will cover how I configured each of these services, along with the best practices I used to make them publicly or remotely accessible and secure.</p>
<p>You can find all the details in the series:</p>
<p><a href="https://nkaushik.in/writing/series/homelab">https://nkaushik.in/writing/series/homelab</a></p>
]]></content:encoded></item><item><title><![CDATA[Running 24/7 Local AI on an Old Android without Overheating]]></title><description><![CDATA[Last week I wrote about repurposing an old Android phone to run local AI models.
In this follow-up I address the biggest obstacle to running the device 24/7: overheating and how I transformed it into ]]></description><link>https://blog.nkaushik.in/running-24-7-local-ai-on-an-old-android-without-overheating</link><guid isPermaLink="true">https://blog.nkaushik.in/running-24-7-local-ai-on-an-old-android-without-overheating</guid><category><![CDATA[local ai]]></category><category><![CDATA[Local AI models]]></category><category><![CDATA[llm]]></category><category><![CDATA[Android]]></category><category><![CDATA[#qwen]]></category><category><![CDATA[personal ai]]></category><category><![CDATA[llama.cpp]]></category><category><![CDATA[privacy]]></category><category><![CDATA[on-device ai]]></category><category><![CDATA[ Edge AI]]></category><category><![CDATA[Tutorial]]></category><category><![CDATA[old android]]></category><dc:creator><![CDATA[Nishchay Kaushik]]></dc:creator><pubDate>Sun, 22 Feb 2026 05:00:00 GMT</pubDate><enclosure url="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/6984dc9e559eb2d777b15a66/a73280b7-58d0-4c6e-81d0-d010f3b5015d.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last week I wrote about repurposing an old Android phone to run local AI models.</p>
<p>In this follow-up, I address the biggest obstacle to running the device 24/7 (overheating) and explain how I transformed it into a fully functional personal local AI solution.</p>
<div>
<div>💡</div>
<div>Read the previous blog here: <a target="_self" rel="noopener noreferrer nofollow" class="text-primary underline underline-offset-2 hover:text-primary/80 cursor-pointer" href="https://blog.nkaushik.in/how-to-run-llm-models-on-old-android-devices-locally" style="pointer-events:none">https://blog.nkaushik.in/how-to-run-llm-models-on-old-android-devices-locally</a></div>
</div>

<h2>Fixing Heating Problem</h2>
<p>One of the main challenges I faced was running it 24/7 without overheating the device.</p>
<p>Fortunately, during a brainstorming session with Gemini, I learned about Advanced Charging Control (ACC), a Magisk Module. <a href="https://github.com/VR-25/acc">https://github.com/VR-25/acc</a></p>
<p>Having already rooted the device for tinkering purposes, I decided to give it a try. Some of the latest phones offer the option to skip charging and run directly on AC power, but since my device was older and lacked this feature, the Magisk module gave me hope.</p>
<p>Installing the module was pretty straightforward, and once done, I had to reboot the device.</p>
<p>After rebooting, my device could now automatically turn charging on and off at specific battery levels. I kept the default settings, which meant it would stop charging at 75% and start again at 70%. This module prevented constant charging, helping reduce the heat generated from being plugged in continuously.</p>
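<p>For reference, the ACC module also exposes a command-line interface in a root shell, so the pause/resume thresholds can be adjusted without an app. This is a sketch based on the module's documented usage; double-check against the ACC README for your version:</p>
<pre><code class="language-shell"># From Termux, in a root shell (requires Magisk root)
su

# Pause charging at 70%, resume at 65% (more conservative than the 75/70 defaults)
acc 70 65
</code></pre>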
<h2>Llama.cpp Server</h2>
<p>Another optimization I implemented involved running the models more efficiently. Previously, I relied on a third-party app to connect to the llama.cpp server on the device, but these apps were often unreliable and glitchy. I discovered that running the llama-server also launches a web UI, accessible at <code>http://localhost:8080</code>, unless explicitly disabled. This allowed me to bypass the faulty third-party apps and directly access the models through the URL.</p>
<p>I faced another challenge: how to switch models according to my needs. I had heard of llama-swap, but since I was running these commands on my Android device, compiling another binary was something I wanted to avoid. Then I found a blog on Hugging Face about model management in llama.cpp, which mentioned that a recent update to llama.cpp had introduced a solution for this issue.</p>
<p>You can now run <code>llama-server --models-dir ./models</code>, allowing users to choose the desired model. In the web UI, you can easily select the model from a drop-down menu.</p>
<p>I simply created an alias that worked for my case and ended up with</p>
<pre><code class="language-shell">llama-server --models-dir ./storage/downloads/models --models-max 1 -c 8192 --sleep-idle-seconds 120
</code></pre>
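<p>The alias itself can live in Termux's <code>~/.bashrc</code> so the long command is one word away. The name <code>llm</code> is just my choice here:</p>
<pre><code class="language-shell"># Append to ~/.bashrc in Termux
alias llm='llama-server --models-dir ./storage/downloads/models --models-max 1 -c 8192 --sleep-idle-seconds 120'

# Reload the shell config and start the server
source ~/.bashrc
llm
</code></pre>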
<p>Now, when I open the server at <code>127.0.0.1:8080</code>, I see something like this:</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/6984dc9e559eb2d777b15a66/55dea351-f31d-4347-b88d-1ead138ea2f0.png" alt="llama cpp web interface showing the chat input box with the dropdown to select models open" style="display:block;margin:0 auto" />

<p>I conducted a simple test on several models by giving each one the identical prompt. After submitting the prompt, I recorded the tokens per second (t/s) performance for each model.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/6984dc9e559eb2d777b15a66/076d1a5d-c880-4946-9de4-7b199f1679af.png" alt="qwen3 0.6b tokens per second" style="display:block;margin:0 auto" />

<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/6984dc9e559eb2d777b15a66/6d712572-cbba-457f-b758-1b444bf1f085.png" alt="qwen3 1.7b tokens per second" style="display:block;margin:0 auto" />

<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/6984dc9e559eb2d777b15a66/1af45af8-3201-4bf5-a078-0ea6ffbd850b.png" alt="lfm2.5 1.2b tokens per second" style="display:block;margin:0 auto" />

<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/6984dc9e559eb2d777b15a66/3361d491-c2ae-46fa-8ec5-03146020672c.png" alt="lfm2.5 1.2b thinking tokens per second" style="display:block;margin:0 auto" />

<p>As expected, <mark class="bg-yellow-200 dark:bg-yellow-500/30">the 0.6B parameter model was the fastest</mark>, while the 1.7B model was the slowest of the ones I benchmarked. Both of the larger Qwen3 models required a considerable amount of time; the Qwen3 4B model was particularly slow, achieving only about 2 tokens per second, which makes it impractical for most uses.</p>
<hr />
<h2>Demo</h2>
<p>Here's an example of the model in action with a sample prompt I provided:</p>
<p><a href="https://youtu.be/k0-X5YokIJ4">https://youtu.be/k0-X5YokIJ4</a></p>
<iframe width="1280" height="720" src="https://www.youtube.com/embed/k0-X5YokIJ4?si=-0NaRYq8wYCNPUUx" frameborder="0" allowfullscreen></iframe>

<h2>Closing thoughts</h2>
<p>In short, using the Advanced Charging Control Magisk module let me run local AI workloads on an older Android continuously without the device constantly charging and overheating. By having the phone stop charging around 75% and resume at 70%, the battery no longer acted as a heat sink while plugged in, which noticeably reduced sustained temperatures and made 24/7 operation practical on hardware that otherwise would have run too hot.</p>
<p>Key takeaways:</p>
<ul>
<li><p>Rooting and ACC worked well for my device but carries risks (voided warranty, potential bricking); proceed only if you’re comfortable with those trade-offs.</p>
</li>
<li><p>Use conservative charge thresholds and monitor temperatures after changes; what’s safe depends on your phone’s age and thermal design.</p>
</li>
<li><p>Combine software measures (charging control, throttling, disabling background apps) with hardware measures (passive heatsinks, improved airflow, occasional breaks) for best results.</p>
</li>
<li><p>Back up your data and test settings incrementally; small adjustments help find a stable balance between uptime and longevity.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[How to Run LLM Models on Old Android Devices Locally]]></title><description><![CDATA[This post covers a much more technical and involving way to run local LLMs on android via a terminal setup.
If you want to try an easy-to-use  , less technical way, I have it covered in this latest bl]]></description><link>https://blog.nkaushik.in/how-to-run-llm-models-on-old-android-devices-locally</link><guid isPermaLink="true">https://blog.nkaushik.in/how-to-run-llm-models-on-old-android-devices-locally</guid><category><![CDATA[gemma-4]]></category><category><![CDATA[gemma]]></category><category><![CDATA[gemma4]]></category><category><![CDATA[#selfhosted]]></category><category><![CDATA[llm]]></category><category><![CDATA[local ai]]></category><category><![CDATA[Local LLM]]></category><category><![CDATA[privacy]]></category><category><![CDATA[termux]]></category><category><![CDATA[llamacpp]]></category><category><![CDATA[llama-server]]></category><category><![CDATA[Android]]></category><category><![CDATA[iOS]]></category><category><![CDATA[AI]]></category><category><![CDATA[AI]]></category><category><![CDATA[AnythingLLM]]></category><category><![CDATA[pocketpal]]></category><category><![CDATA[google edge]]></category><category><![CDATA[ Edge AI]]></category><dc:creator><![CDATA[Nishchay Kaushik]]></dc:creator><pubDate>Fri, 13 Feb 2026 20:16:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/unt3066GV-E/upload/6e4c06b766193b8c17084bbd3bfd5598.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>This post covers a much more technical and involved way to run local LLMs on Android via a terminal setup.</p>
<p>If you want to try an <mark class="bg-yellow-200 dark:bg-yellow-500/30">easy-to-use</mark>, less technical way, I have it covered in my latest blog post <a href="https://nkaushik.in/writing/top-4-ways-to-run-llm-locally-on-android-and-ios/">https://nkaushik.in/writing/top-4-ways-to-run-llm-locally-on-android-and-ios/</a></p>
<p>In the above post, I go through 4 ways to run LLMs on both Android and iOS devices.</p>
</blockquote>
<hr />
<p>When LLM models were first launched, we had to rely on the cloud versions, like ChatGPT or Gemini. However, things are changing for the better. We are now seeing a wave of new AI models being released every week, and most of them can run locally with good performance. This allows us to perform AI inference on edge or even mobile devices.</p>
<p>I have been running these models on my Windows machine using Ollama for a while, and even on my latest high-end Android phone with apps like <a href="https://play.google.com/store/apps/details?id=com.google.ai.edge.gallery&amp;hl=en_IN">Google AI Edge Gallery</a>, <a href="https://play.google.com/store/apps/details?id=com.pocketpalai&amp;hl=en_IN">PocketPal</a>, and <a href="https://play.google.com/store/apps/details?id=com.anythingllm&amp;hl=en_IN">AnythingLLM</a>. Everything has been running smoothly. I've used these models for tasks like describing images or helping me improve my writing. The speed of inference is quite fast, thanks to the latest and fastest mobile SoCs they are running on.</p>
<p>BUT…</p>
<p>I wanted to try something different. I had an old Android phone (OnePlus 3T) lying around, and I always wondered if I could find a use for it. So, this weekend, I did a quick proof of concept.</p>
<p>I tested whether I could run these new AI models locally on this old Android device, which doesn't have the most powerful hardware. Fortunately, tools like <a href="https://unsloth.ai/">Unsloth</a>, GGUF, and <a href="https://github.com/ggml-org/llama.cpp">Llama.cpp</a> are available.</p>
<p>In cases where we have limited RAM, the GGUF format of models lets us run them with much lower VRAM requirements while maintaining nearly the same accuracy.</p>
<p>To get started, I first needed to figure out how to run these models. Llama.cpp offers a CLI or server method to interact with your model. However, they provide pre-built binaries for Windows and Linux, not for Android, so we need to compile them for Android.</p>
<h3>🛠️Installing Termux</h3>
<p>To compile llama.cpp for Android, I needed a shell, and Termux was the solution. The latest version on the Play Store wasn't compatible with my device, but I managed to download and install the APK from F-Droid. With Termux running, I was ready to proceed to the next step: compiling llama.cpp.</p>
<h3>👷🏼Building llama.cpp</h3>
<p>This was quite straightforward; all I had to do was follow these steps:</p>
<p><a href="https://github.com/ggml-org/llama.cpp/blob/master/docs/android.md#build-cli-on-android-using-termux">https://github.com/ggml-org/llama.cpp/blob/master/docs/android.md#build-cli-on-android-using-termux</a></p>
<div>
<div>🤔</div>
<div>While I was building llama.cpp, I ran into a permission issue, which happens if you clone the repository into external storage; instead, clone into Termux's home directory and build it there.</div>
</div>

<h3>🏃🏻Running the model</h3>
<p>Once llama.cpp was built, the binaries were placed in the <code>bin</code> folder under the <code>build</code> directory. Then, I could start the server using</p>
<pre><code class="language-bash">./build/bin/llama-server -m model-path -c 2048 -n 4096 --host 0.0.0.0 --port 8080
</code></pre>
<p>Since I was going to connect to this server from a different device, I had to set the host to 0.0.0.0.</p>
<p>I tried using 4 different models with <code>Q4_K_M</code> quantization:</p>
<ul>
<li><p><a href="https://huggingface.co/unsloth/LFM2.5-1.2B-Instruct-GGUF">https://huggingface.co/unsloth/LFM2.5-1.2B-Instruct-GGUF</a></p>
</li>
<li><p><a href="https://huggingface.co/LiquidAI/LFM2.5-1.2B-Thinking">https://huggingface.co/LiquidAI/LFM2.5-1.2B-Thinking</a></p>
</li>
<li><p><a href="https://huggingface.co/unsloth/Qwen3-0.6B-GGUF">https://huggingface.co/unsloth/Qwen3-0.6B-GGUF</a></p>
</li>
<li><p><a href="https://huggingface.co/unsloth/gemma-3-1b-it-GGUF">https://huggingface.co/unsloth/gemma-3-1b-it-GGUF</a></p>
</li>
</ul>
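<p>To fetch one of these GGUF files onto the device, something like the following works from Termux. The exact filename inside each repo varies, so treat the one below as a placeholder and check the repo's file list on Hugging Face first:</p>
<pre><code class="language-shell"># Download a Q4_K_M quant directly with curl (-C - makes it resumable)
curl -L -C - -o qwen3-0.6b-q4_k_m.gguf \
  "https://huggingface.co/unsloth/Qwen3-0.6B-GGUF/resolve/main/Qwen3-0.6B-Q4_K_M.gguf"
</code></pre>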
<p>I also had to adjust the <code>-c</code> and <code>-n</code> option values based on the available RAM.</p>
<p>Once the server was running, I needed a client to connect to it.</p>
<p>First, I configured the AnythingLLM app. Llama.cpp starts a server that is compatible with the Generic OpenAI spec. However, I often encountered issues with the app when trying to use the model, and it simply wouldn't work.</p>
<p>Then, I switched to <a href="https://github.com/open-webui/open-webui">Open-WebUI</a> as a client, and it was an immediate success. As soon as I entered the details, it detected the model, and starting a chat with the model worked smoothly.</p>
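<p>Because the llama.cpp server exposes an OpenAI-style chat endpoint, you can also skip a client app entirely and talk to it with <code>curl</code> from any machine on the network. The IP below is a placeholder for the phone's address, and the <code>model</code> value is illustrative (with a single loaded model it is typically ignored):</p>
<pre><code class="language-shell">curl http://192.168.1.42:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "qwen3-0.6b",
        "messages": [{"role": "user", "content": "Say hello in five words."}],
        "max_tokens": 64
      }'
</code></pre>
<p>Any OpenAI-compatible client, Open-WebUI included, is essentially making this same request under the hood.</p>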
<p>I tried running each model one by one to understand their performance. As expected, Qwen 3 0.6B was the fastest. The performance of these models was just okay. In some cases, I could get 10-18 tokens per second, while the Thinking models and larger models only produced 4-5 tokens per second. So, while they worked, the results weren't very impressive.</p>
<p>A key point to remember is that the <a href="https://www.gsmarena.com/oneplus_3t-8416.php">OnePlus 3T</a> was launched 10 years ago, so I wasn't expecting groundbreaking performance. For comparison, when I run local models on the Snapdragon 8 Elite chipset, I get over 30 tokens per second on the LFM2.5 Thinking model with Q8 quantization, along with almost instant Time to First Token(TTFT). This performance is better and more consistent with other models I ran on the same chipset.</p>
<p>Another downside of running these models on a mobile device is the heat. Extended use will always cause your device to start heating up.</p>
<p>In conclusion, running LLMs on older Android devices is feasible, albeit with limitations. By leveraging tools like Termux and llama.cpp, it's possible to compile and run AI models locally even on hardware that is far from cutting-edge. Performance won't match newer devices, and heat and slow token rates are real drawbacks, but this approach is a valuable way to repurpose older technology. It shows that local AI can be accessible without relying solely on high-end devices or cloud-based solutions.</p>
<div>
<div>💡</div>
<div>Part 2: <a target="_blank" rel="noopener noreferrer nofollow" class="text-primary underline underline-offset-2 hover:text-primary/80 cursor-pointer" href="https://nkaushik.in/writing/running-24-7-local-ai-on-an-old-android-without-overheating/" style="pointer-events:none">https://nkaushik.in/writing/running-24-7-local-ai-on-an-old-android-without-overheating/</a></div>
</div>]]></content:encoded></item><item><title><![CDATA[Hello World!]]></title><description><![CDATA[Hi there👋, I’m Nishchay, an Engineer, I am currently exploring the world of AI and Self Hosting. This is my space to share my thoughts with you.
I have over ten years of experience in the software industry, where I have worked with various organizat...]]></description><link>https://blog.nkaushik.in/hello-world</link><guid isPermaLink="true">https://blog.nkaushik.in/hello-world</guid><category><![CDATA[Hello World]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[software development]]></category><category><![CDATA[generative ai]]></category><category><![CDATA[Programming Blogs]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[TypeScript]]></category><dc:creator><![CDATA[Nishchay Kaushik]]></dc:creator><pubDate>Thu, 05 Feb 2026 18:56:24 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/LSuIc8Riv9I/upload/728620da67a33014e5c8698f89abef30.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi there👋, I’m Nishchay, an engineer currently exploring the world of AI and self-hosting. This is my space to share my thoughts with you.</p>
<p>I have over ten years of experience in the software industry, where I have worked with various organizations in both B2C and B2B sectors.</p>
<p>Over the years, I have worked with many technologies and tools that fascinated me. I began my journey with PHP and have since built Native Android Apps and React Native Apps. I've also had the chance to work with Ruby on Rails, Golang, and Python. Additionally, I have hands-on experience in maintaining services and infrastructure on both Google Cloud (GCP) and AWS, as well as with Kubernetes (k8s).</p>
<p>I currently work in the JavaScript/TypeScript ecosystem, specializing in React.js &amp; Node.js.</p>
<p>I have deep expertise in the frontend domain, which I recently leveraged to build a comprehensive performance-testing platform from the ground up. The project spanned not only the frontend but also the backend and the data engineering behind it. I took it on with a very small team, yet we shipped the entire platform; it is now widely used across the organization and serves as a crucial guardrail for performance-related checks company-wide. It has significantly improved our ability to monitor and optimize performance, and reinforced the importance of performance testing in our engineering culture.</p>
<p>I enjoy building things and trying out new ideas. I've been working to keep up with the rapid advancements in the Generative AI space.</p>
]]></content:encoded></item><item><title><![CDATA[Exploring the Power of Web Browsers in 2020]]></title><description><![CDATA[The web is a very powerful platform it grants you the ability to share something with everyone irrespective of what device they are using, be it an Android or iOS phone or a Windows, Linux, or Mac laptop.
With time the things, a browser could do have...]]></description><link>https://blog.nkaushik.in/exploring-the-power-of-web-browsers-in-2020</link><guid isPermaLink="true">https://blog.nkaushik.in/exploring-the-power-of-web-browsers-in-2020</guid><category><![CDATA[Browsers]]></category><category><![CDATA[WebRTC]]></category><category><![CDATA[streaming]]></category><category><![CDATA[video]]></category><category><![CDATA[zoom]]></category><category><![CDATA[Google Meet]]></category><dc:creator><![CDATA[Nishchay Kaushik]]></dc:creator><pubDate>Mon, 28 Dec 2020 06:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1771055569582/a4c71cbc-a90b-4132-81db-e03313eda3e0.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The web is a very powerful platform: it grants you the ability to share something with everyone irrespective of what device they are using, be it an Android or iOS phone or a Windows, Linux, or Mac laptop.</p>
<p>With time, the things a browser can do have also increased greatly (except IE11; switch to Firefox already). A webpage is no longer just some text and images: you can add far more power-hungry features to it. You can create games, build services like Google Docs or BrowserStack, and a lot more, all running in a browser.</p>
<p>This post is going to talk about one of the browser’s features I had a chance to work with and the tech I explored for integrating it.</p>
<h3 id="heading-problem-statement">Problem Statement</h3>
<p>The problem statement in front of me was to implement a screen-share feature wherein, when a user shares their screen, others can view it over the internet as a real-time video.</p>
<p>The first step was to break this problem down into smaller problems and build a prototype around them.<br />As I saw it, the problem had two pieces: first, figuring out how to capture the screen; second, how to send it across to other users as a live video feed.<br />I’ll start with the screen-share bit.</p>
<h3 id="heading-screen-sharerecord">Screen-share/Record</h3>
<p>I knew websites already do it (you would’ve seen it in action on Zoom or Google Meet) but wasn’t aware how easy it is to do on our own.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> promise = navigator.mediaDevices.getDisplayMedia(constraints);
</code></pre>
<p>The above line is all it takes to get started with screen-sharing. It is supported by all the latest browsers and you can read more on its support at <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getDisplayMedia">https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getDisplayMedia</a></p>
<p>When you run the above code snippet you will get a <code>Promise&lt;MediaStream&gt;</code>; now it is up to us to decide how we want to use this.</p>
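<p>For instance, a quick way to see the capture locally is to attach the stream to a <code>&lt;video&gt;</code> element (this sketch assumes the page already contains one):</p>
<pre><code class="lang-javascript">navigator.mediaDevices.getDisplayMedia({ video: true })
    .then(stream =&gt; {
        // preview the captured screen in a &lt;video&gt; element on the page
        const video = document.querySelector("video");
        video.srcObject = stream;
        return video.play();
    })
    .catch(err =&gt; console.error("capture cancelled or failed:", err));
</code></pre>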
<p>We can either share it with other people using WebRTC (more on it later) or even record it and download the video.</p>
<p>Example snippet to download the recorded video (taken from <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/API/MediaStream_Recording_API">https://developer.mozilla.org/en-US/docs/Web/API/MediaStream_Recording_API</a>)</p>
<pre><code class="lang-javascript">promise.then(stream =&gt; {
    var recordedChunks = [];

    var options = { mimeType: "video/webm; codecs=vp8" };
    var mediaRecorder = new MediaRecorder(stream, options);

    mediaRecorder.ondataavailable = handleDataAvailable;
    mediaRecorder.start();

    // called when recorded data becomes available (here: once, on stop)
    function handleDataAvailable(event) {
        if (event.data.size &gt; 0) {
            recordedChunks.push(event.data);
            download();
        }
    }

    // wrap the chunks in a Blob and trigger a file download
    function download() {
        var blob = new Blob(recordedChunks, {
            type: "video/webm"
        });
        var url = URL.createObjectURL(blob);

        var a = document.createElement("a");
        document.body.appendChild(a);
        a.style = "display: none";
        a.href = url;
        a.download = "test.webm";
        a.click();
        window.URL.revokeObjectURL(url);
    }

    // demo: stop recording (which triggers the download) after 9 seconds
    setTimeout(function () {
        console.log("stopping");
        mediaRecorder.stop();
    }, 9000);
});
</code></pre>
<p>One thing to note about the <code>getDisplayMedia</code> API is that it behaves differently across browsers. For example, Chrome lets you choose between a single tab, a window, or the whole desktop; Firefox shares a whole window; and Safari shares the entire desktop.</p>
<p>A limitation of this API is that it won’t let you share only a part of your webpage.</p>
<p>Now that we have solved half of the problem, let’s get to the second part.</p>
<h3 id="heading-webrtc">WebRTC</h3>
<p>If you haven’t heard about it, don’t worry, I will add resources for you to read. It is an interesting topic so you should give it a try.</p>
<blockquote>
<p><strong>WebRTC</strong> (Web Real-Time Communication) is a technology that enables Web applications and sites to capture and optionally stream audio and/or video media, as well as to exchange arbitrary data between browsers without requiring an intermediary.</p>
</blockquote>
<p>The above definition is taken from MDN docs which captures the gist of what WebRTC is (<a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API">https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API</a>).</p>
<p>WebRTC is a very detailed topic and it won’t be easy to cover all that it provides in this article. I’d highly recommend going through this video on YouTube.</p>
<p><a target="_blank" href="https://www.youtube.com/watch?v=p2HzZkd2A40">https://www.youtube.com/watch?v=p2HzZkd2A40</a></p>
<p>Based on the above video, we can derive two important steps: acquire a video/audio stream, then use WebRTC to share it with peers.</p>
<p>This video stream that we share can either be a stream from your webcam or any MediaStream from other sources.</p>
<p>If you recall, the screen-share stream we acquired is also a MediaStream which can be used with WebRTC.</p>
<p>So once we have set up our WebRTC connection, we can simply publish this screen-share stream on it and make it available to peers in real-time.</p>
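<p>With the raw WebRTC API, “publishing” the stream boils down to adding its tracks to the peer connection. A sketch only: the signaling needed to complete the connection is omitted, and <code>screenStream</code> stands for the stream acquired earlier:</p>
<pre><code class="lang-javascript">const pc = new RTCPeerConnection();

// add every track of the screen-share stream to the connection;
// the browser will renegotiate and send them to the remote peer
screenStream.getTracks().forEach(track =&gt; pc.addTrack(track, screenStream));
</code></pre>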
<p>Implementing WebRTC from scratch using the basic API can be a tedious task, so there are various libraries and frameworks available. These libraries hide away the software complexity, but to ensure WebRTC works for 100% of your users you would also need to set up a server of your own (e.g. for signaling and relaying).</p>
<p>PeerJS (<a target="_blank" href="https://peerjs.com/">https://peerjs.com</a>) is one such example, which makes it easier for you to get started.</p>
<p>Various companies provide you with the infrastructure and SDK needed to get started with real-time video, for example, Twilio. This takes away both the software and hardware complexity and makes it easier for you to implement and support a wide variety of users and devices.</p>
<p>In my case, we were already using Twilio for the video-call feature, and it also let us publish additional MediaStreams for a user when needed.</p>
<p>This made screen-sharing with other users a breeze: all it required was acquiring the screen-share stream and publishing it on the user’s connection.</p>
<p>Twilio also has a sample app that you can refer to for understanding how to do the above.</p>
<p><a target="_blank" href="https://github.com/twilio/twilio-video-app-react">https://github.com/twilio/twilio-video-app-react</a></p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>This was just one example of what can be achieved using these two pieces of technology: WebRTC &amp; getDisplayMedia.</p>
<p>The possibilities are limitless.</p>
<p>For example, you can build a tool that records your webcam as well as your screen to make a coding tutorial, with a few lines of code, right from your browser.</p>
<p>A small experiment I did was to find a way to share only a part of a webpage. For this, I wrapped the area I wanted to share in a border of a distinctive color, and on the receiving side, before showing the video stream, I used image processing to crop out everything outside the border. This isn’t a great solution, and doing that processing in JavaScript is extremely memory-intensive, so I won’t recommend it.</p>
<blockquote>
<p>Special shout-out to <a target="_blank" href="https://medium.com/u/372f5561ad9b">Punit Gupta</a> for helping me with this article.</p>
<p>Thanks for reading.<br />Do share your thoughts and suggestions.</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Building Your Own GitHub-Like Platform Using Golang]]></title><description><![CDATA[At work I primarily use JavaScript and sometimes get my hands dirty with Ruby On Rails. But I am really not a fan of RoR and wanted to learn something new.
I already have had some hands-on experience with Python, Java, PHP so this new thing had to be...]]></description><link>https://blog.nkaushik.in/building-your-own-github-like-platform-using-golang</link><guid isPermaLink="true">https://blog.nkaushik.in/building-your-own-github-like-platform-using-golang</guid><category><![CDATA[golang]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[Git]]></category><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[gin]]></category><dc:creator><![CDATA[Nishchay Kaushik]]></dc:creator><pubDate>Sat, 26 Dec 2020 06:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1771055578740/d9a4be66-5934-4882-abbd-d439a53103ea.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>At work I primarily use JavaScript and sometimes get my hands dirty with Ruby On Rails. But I am really not a fan of RoR and wanted to learn something new.</p>
<p>I already had some hands-on experience with Python, Java, and PHP, so this new thing had to be none of those. Rust &amp; Go are the two buzzwords I kept hearing when it comes to programming languages, so I decided to explore them both. I was fascinated by Golang, and after watching various videos on why and how it was built, I decided to start my journey with it.</p>
<p>As with any new language, I started with the “hello world” program. But to fully enjoy the language I needed to make something a bit bigger.</p>
<p>I started exploring ideas, and one day I began to wonder how GitHub works. We know it relies on git, but what would it take to build something similar of my own, and in the process get my hands dirty with Go?</p>
<p>This is the story of the various topics I explored and the blog posts &amp; docs I read along the way, shared here.</p>
<p><strong>Disclaimer</strong>: I am in no way an expert on the technologies below; this post highlights the various things I learned while building the following project.</p>
<h3 id="heading-goal"><strong>Goal</strong></h3>
<p>The aim of this project was to set up a GitHub-like website where I could upload my code, using HTTP for git operations on the remote repository. (This post won’t cover what it takes to support SSH-based git remote operations, and implements only the most basic functionality.)</p>
<p>I wanted to explore the Go language, so the code snippets and examples here are written in Go, but the core logic/idea remains the same in any language.</p>
<h3 id="heading-the-basics"><strong>The Basics</strong></h3>
<p>You can follow any guide to set up a basic HTTP server and handle various routes in your preferred web framework.</p>
<p>For my project I used <code>gin</code> (<a target="_blank" href="http://github.com/gin-gonic/gin">http://github.com/gin-gonic/gin</a>) to set up some basic routes.</p>
<p>To make things simpler, I wrote my routes such that all git operations on a repository happen at a URL of this format: <code>http://domain.com/git/repo-name</code><br />This means the remote URL for a local repository would look like the above, and you would use this URL to clone the repository and for other git operations as well.</p>
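<p>Inside a handler, the repository name can then be recovered from the request path before dispatching to the git logic. A minimal sketch (the helper name is mine, not from the project):</p>
<pre><code class="lang-go">package main

import (
	"fmt"
	"strings"
)

// repoFromPath extracts "repo-name" from a path like "/git/repo-name/info/refs".
func repoFromPath(path string) (string, bool) {
	const prefix = "/git/"
	if !strings.HasPrefix(path, prefix) {
		return "", false
	}
	// the repo name is the first path segment after the prefix
	name := strings.SplitN(strings.TrimPrefix(path, prefix), "/", 2)[0]
	return name, name != ""
}

func main() {
	name, ok := repoFromPath("/git/test/info/refs")
	fmt.Println(name, ok)
}
</code></pre>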
<h3 id="heading-adding-functionality"><strong>Adding Functionality</strong></h3>
<p><strong>Creating a Repository</strong></p>
<pre><code class="lang-http"><span class="hljs-attribute">POST /repo
Content-Type</span>: application/json

<span class="json">{
    <span class="hljs-attr">"name"</span>: <span class="hljs-string">"repo-name"</span>
}</span>
</code></pre>
<p>We know that to start a git repository we run <code>git init</code>. Running this command creates a <code>.git</code> folder with all the required files.</p>
<p>What happens when you need to create a repository on your server?</p>
<p>We use the same command but pass an additional option: <code>--bare</code>.<br />Passing this option creates a git repository without a working tree. Such a repository prevents changes from being made on the remote directly; one can’t run the usual <code>git commit</code> commands in this directory.</p>
<p>Try running the <code>git init --bare</code> command on your system and look at the file structure. You will notice it is similar to the contents of a <code>.git</code> directory, except that all the files/folders sit in the root of the folder where you ran the command instead of inside <code>.git</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1771055572075/fd2984a2-cc51-4b31-8cf0-bc6f8e06b658.png" alt /></p>
<p><code>git init --bare</code> file structure</p>
<p>You can look more into the <code>--bare</code> option to understand it even better.</p>
<p>Ref: <a target="_blank" href="https://www.atlassian.com/git/tutorials/setting-up-a-repository/git-init">https://www.atlassian.com/git/tutorials/setting-up-a-repository/git-init</a></p>
<p>Now that we know how to set up a repository on the server, we can create an endpoint that lets us create new repositories.</p>
<p>Code for such an endpoint could look like the snippet below. Here I have stripped out the various checks I implemented before actually creating a repository.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1771055573873/306eeec4-3ea1-4b0e-90c8-5feb11ece813.png" alt /></p>
<h3 id="heading-create-repository-http-route">Create Repository HTTP Route</h3>
<p>I created a small <code>utils</code> package to handle all git-specific operations. Code for the <code>CreateNewRepo</code> function used in the above snippet is below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1771055575473/3e1facf9-4aba-44b4-9e66-3a186125c2da.png" alt class="image--center mx-auto" /></p>
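<p>In essence, creating a repository on the server boils down to shelling out to <code>git init --bare</code> inside a base directory. A simplified sketch of the idea (the directory layout and error handling are illustrative, not the project’s exact code):</p>
<pre><code class="lang-go">package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// CreateNewRepo initialises a bare git repository named `name` under baseDir
// and returns the path of the new repository.
func CreateNewRepo(baseDir, name string) (string, error) {
	repoPath := filepath.Join(baseDir, name)
	if err := os.MkdirAll(repoPath, 0755); err != nil {
		return "", err
	}
	// --bare: no working tree, suitable for use as a remote
	out, err := exec.Command("git", "init", "--bare", repoPath).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("git init --bare failed: %v: %s", err, out)
	}
	return repoPath, nil
}

func main() {
	base, _ := os.MkdirTemp("", "repos")
	defer os.RemoveAll(base)

	repo, err := CreateNewRepo(base, "test")
	fmt.Println(repo, err)
}
</code></pre>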
<p>We now have an endpoint set up which, when hit, will create a new repository for us.</p>
<p>Now let’s implement the main git operations.</p>
<h3 id="heading-implementing-git-operations-support"><strong>Implementing Git Operations support</strong></h3>
<p>When we run <code>git clone</code>, <code>git push</code>, or <code>git pull</code> in our terminal and the remote repository URL is <code>HTTP</code>-based, git internally uses HTTP to perform the operations required by these commands.</p>
<p>This is a key piece of information: all we need to do now is make sure our server supports all the endpoints <code>git</code> will use, and handle the request and response for each of them.</p>
<p>Luckily, there are already tools/libraries which do this, so we won’t need to implement them from scratch.</p>
<p>For my project in Go I came across this project <a target="_blank" href="https://github.com/asim/git-http-backend/blob/master/server/server.go">https://github.com/asim/git-http-backend/blob/master/server/server.go</a></p>
<p>The author of this project had already implemented all the necessary endpoints, along with the core functionality for each of them, using Go’s default HTTP server as the base.</p>
<p>Since I was using <code>Gin</code>, I had to tweak the code to work with the <code>Gin</code> framework. This was comparatively easy, as all each <code>Handler</code> function needs is the <code>Request</code> &amp; <code>Response</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1771055576963/71ba6e41-ddce-4004-a726-673ff41e87e8.png" alt /></p>
<p>That’s it.</p>
<p>We now have the required endpoints created and we can now take our code for a spin.</p>
<h3 id="heading-lets-execute"><strong>Let’s Execute</strong></h3>
<p>Let’s say your server is running on port <code>8000</code> and you have created a repo named <code>test</code>.<br />In your local terminal you could run <code>git clone http://localhost:8000/git/test</code> and it would clone the repository (empty, if you hadn’t pushed anything yet).<br />You can now commit and push changes, and they will show up on your remote repository.</p>
<p><strong>References:</strong></p>
<p>To read more about git, I’d recommend the official docs (for depth) or Atlassian’s guides (for a brief intro).</p>
<p>Some more in-depth resources below:</p>
<p><a target="_blank" href="https://git-scm.com/book/en/v2/Git-on-the-Server-The-Protocols"><strong>Git - The Protocols</strong></a></p>
<p><a target="_blank" href="https://git-scm.com/docs/pack-protocol"><strong>Git - pack-protocol Documentation</strong></a></p>
<h3 id="heading-next-steps"><strong>Next Steps</strong></h3>
<h4 id="heading-git-hooks-based-events"><strong>Git Hooks based events</strong></h4>
<p>I modified my server to set up a WebSocket connection on a route of the form <code>http://localhost:8000/ws/repo-name/</code>. This creates a per-repo WebSocket connection between the clients and our server.<br />I could then use this connection to push various pieces of information to the clients.</p>
<p>An example of such use-case is below:</p>
<p>Together with git hooks (<code>post-receive</code>), I can now push a message over the WebSocket connection to the clients whenever a new commit/ref is updated on the remote repository.<br />This would allow me to build a front-end in the future that listens to these events and shows a message to the user.</p>
<p>You can read more about the Git hooks here</p>
<p><a target="_blank" href="https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks"><strong>Git - Git Hooks</strong></a></p>
<p>To build the above feature you would need to set up <strong>server-side hooks</strong>.</p>
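<p>For reference, a server-side hook is just an executable script inside the bare repository’s <code>hooks/</code> directory; for <code>post-receive</code>, git writes one line per updated ref to stdin. A sketch that forwards each update to a hypothetical notification endpoint on our server (the <code>/events/repo-name</code> route is an assumption, not something git provides):</p>
<pre><code class="lang-bash">#!/bin/sh
# hooks/post-receive: git feeds "&lt;old-sha&gt; &lt;new-sha&gt; &lt;ref-name&gt;" per updated ref
while read old new ref; do
    curl -s -X POST "http://localhost:8000/events/repo-name" \
        --data-urlencode "ref=$ref" --data-urlencode "new=$new"
done
</code></pre>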
<h4 id="heading-authentication"><strong>Authentication</strong></h4>
<p>To implement basic authentication, where one user cannot push to another user’s repository, you can use the <code>pre-receive</code> git hook on the server side, add the authentication logic there, and block the operation if access is denied. (<em>There may be other ways to implement this feature; this is something I thought of and haven’t implemented yet.</em>)</p>
<blockquote>
<p>This is my first-time writing a blog post, feedback is appreciated.</p>
<p>Thanks.</p>
</blockquote>
]]></content:encoded></item></channel></rss>