<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[The Elemental : Tech]]></title><description><![CDATA[The Elemental : Tech]]></description><link>https://tech.abhishekpatil.blog</link><generator>RSS for Node</generator><lastBuildDate>Wed, 08 Apr 2026 11:37:20 GMT</lastBuildDate><atom:link href="https://tech.abhishekpatil.blog/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Setting Up Claude Code to Work with OpenRouter Using Claude Code Router]]></title><description><![CDATA[Claude Code Router (CCR) is a lightweight CLI-based code interpreter and dev assistant interface designed for local LLM routing. It supports multiple model providers, including OpenRouter, and allows dynamic model selection for tasks like coding, thi...]]></description><link>https://tech.abhishekpatil.blog/claude-code-with-openrouter-using-claude-code-router</link><guid isPermaLink="true">https://tech.abhishekpatil.blog/claude-code-with-openrouter-using-claude-code-router</guid><category><![CDATA[claude code router]]></category><category><![CDATA[claude-code]]></category><category><![CDATA[openrouter]]></category><category><![CDATA[opensource]]></category><category><![CDATA[#qwen]]></category><category><![CDATA[Kimi]]></category><dc:creator><![CDATA[Abhishek Patil]]></dc:creator><pubDate>Wed, 30 Jul 2025 20:25:25 GMT</pubDate><content:encoded><![CDATA[<p><strong>Claude Code Router (CCR)</strong> is a lightweight CLI-based code interpreter and dev assistant interface designed for local LLM routing. It supports multiple model providers, including OpenRouter, and allows dynamic model selection for tasks like coding, thinking, and long-context interactions.</p>
<p>If you want to use <strong>Claude Code</strong> with <strong>OpenRouter</strong> models like Qwen3 Coder or Kimi-K2 Pro, this guide walks you through the setup.</p>
<h3 id="heading-prerequisites"><strong>Prerequisites</strong></h3>
<ul>
<li><p>Node.js (v18+)</p>
</li>
<li><p>A working terminal or shell (e.g., bash, zsh)</p>
</li>
<li><p>An OpenRouter API key (from <a target="_blank" href="https://openrouter.ai">https://openrouter.ai</a>)</p>
<p>  <em>Note that you must have bought at least $10 in credits to be able to use even the <code>:free</code>-tagged models.</em></p>
</li>
</ul>
<p>Find the entire configuration on <a target="_blank" href="https://claudecoderouter.com/#how-to-use">https://claudecoderouter.com/</a>, although I must admit, I spent close to a week going around in circles trying to work it out, which nearly made me want to tear off my face 😵.</p>
<p>That’s when I came across AI Oriented Dev's YouTube video; shoutout to him 🙌.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=EkNfythQNRg&amp;t=1084s">https://www.youtube.com/watch?v=EkNfythQNRg&amp;t=1084s</a></div>
<p> </p>
<p>I discovered one key difference in how he approached it:</p>
<p>Before using Claude for coding help, make sure to <strong>start the local router service</strong>:</p>
<pre><code class="lang-bash">claude start
</code></pre>
<p>Here is the complete installation process:</p>
<ol>
<li>Install Claude Code Router</li>
</ol>
<pre><code class="lang-bash">npm install -g @musistudio/claude-code-router
</code></pre>
<ol start="2">
<li><p>Configure the <code>config.json</code> file, which must be located at <code>~/.claude-code-router</code>. Create it if it is not already present.</p>
<p> Here is the exact <code>config.json</code> file I’m using:</p>
</li>
</ol>
<pre><code class="lang-json">{
  <span class="hljs-attr">"Providers"</span>: [
    {
      <span class="hljs-attr">"name"</span>: <span class="hljs-string">"openrouter"</span>,
      <span class="hljs-attr">"api_base_url"</span>: <span class="hljs-string">"https://openrouter.ai/api/v1/chat/completions"</span>,
      <span class="hljs-attr">"api_key"</span>: <span class="hljs-string">"sk-openrouter-api-key"</span>,
      <span class="hljs-attr">"models"</span>: [
        <span class="hljs-string">"anthropic/claude-3.5-sonnet"</span>,
        <span class="hljs-string">"google/gemini-2.5-pro-preview"</span>,
        <span class="hljs-string">"moonshotai/kimi-k2"</span>,
        <span class="hljs-string">"qwen/qwen3-coder:free"</span>
      ],
      <span class="hljs-attr">"transformer"</span>: {
        <span class="hljs-attr">"use"</span>: [<span class="hljs-string">"openrouter"</span>]
      }
    }
  ],
  <span class="hljs-attr">"Router"</span>: {
    <span class="hljs-attr">"default"</span>: <span class="hljs-string">"openrouter,qwen/qwen3-coder:free"</span>,
    <span class="hljs-comment">// "default": "openrouter,qwen/qwen3-coder",</span>
    <span class="hljs-comment">// "default": "openrouter,moonshotai/kimi-k2",</span>
    <span class="hljs-comment">// "default": "openrouter,moonshotai/kimi-k2:free",</span>

    <span class="hljs-attr">"background"</span>: <span class="hljs-string">"openrouter,qwen/qwen3-coder:free"</span>,
    <span class="hljs-attr">"think"</span>: <span class="hljs-string">"openrouter,qwen/qwen3-coder:free"</span>,
    <span class="hljs-attr">"longContext"</span>: <span class="hljs-string">"openrouter,qwen/qwen3-coder:free"</span>
  },
  <span class="hljs-attr">"API_TIMEOUT_MS"</span>: <span class="hljs-number">600000</span>,
  <span class="hljs-attr">"LOG"</span>: <span class="hljs-literal">true</span>
}
</code></pre>
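<p>If the folder or file doesn’t exist yet, you can create them before pasting the config in (a minimal sketch; <code>~/.claude-code-router</code> is the path mentioned above):</p>
<pre><code class="lang-bash">mkdir -p ~/.claude-code-router
touch ~/.claude-code-router/config.json
</code></pre>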
<p>You might want to add these additional fields:</p>
<p><code>API_TIMEOUT_MS=600000</code>, which sets the <strong>timeout duration</strong> for API requests to 600,000 ms, or 10 minutes. It is particularly useful when using <strong>slow or large models</strong>, which may take longer for complex completions.</p>
<p><code>LOG=true</code>, which enables logging.</p>
<ol start="3">
<li>Start the local router service. This is the step that made all the difference, and it’s somehow not mentioned in the documentation as of this moment</li>
</ol>
<pre><code class="lang-bash">claude start
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753906402883/4d694b8c-54e0-45dd-8435-9240a366dd83.png" alt class="image--center mx-auto" /></p>
<ol start="4">
<li>Once the router is up and running, you can jump into coding mode:</li>
</ol>
<pre><code class="lang-bash">claude code
</code></pre>
<p>And voila, it's working as expected!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753906354169/46c16753-16aa-4736-98b1-be2097998579.png" alt class="image--center mx-auto" /></p>
<p>That’s it from me! Let me know if you run into any issues; I’ve probably hit the same ones 😄</p>
<p>Cheers!</p>
]]></content:encoded></item><item><title><![CDATA[Setting Up Claude Code to Work with OpenRouter Using LiteLLM]]></title><description><![CDATA[When we first install Claude Code CLI, it’s hardwired to talk to Anthropic’s API. But what if you want to use open-source models instead like Qwen or Kimi using OpenRouter?
That’s where LiteLLM comes in. It acts as a local proxy that can translate Cl...]]></description><link>https://tech.abhishekpatil.blog/setting-up-claude-code-to-work-with-openrouter-using-litellm</link><guid isPermaLink="true">https://tech.abhishekpatil.blog/setting-up-claude-code-to-work-with-openrouter-using-litellm</guid><category><![CDATA[AI]]></category><category><![CDATA[claude-code]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[Kimi]]></category><category><![CDATA[#qwen]]></category><dc:creator><![CDATA[Abhishek Patil]]></dc:creator><pubDate>Thu, 24 Jul 2025 18:30:00 GMT</pubDate><content:encoded><![CDATA[<p>When we first install Claude Code CLI, it’s hardwired to talk to Anthropic’s API. But what if you want to use open-source models like Qwen or Kimi via OpenRouter instead?</p>
<p>That’s where <a target="_blank" href="https://github.com/BerriAI/litellm">LiteLLM</a> comes in. It acts as a local proxy that can translate Claude Code’s Anthropic-style requests into whatever OpenRouter (or any other provider) expects.</p>
<p>In this post, I’ll walk through how I got Claude Code to work with OpenRouter’s <code>qwen/qwen3-coder:free</code> model using LiteLLM, along with all the missteps I made and what finally worked.</p>
<hr />
<h2 id="heading-what-were-setting-up"><strong>What We’re Setting Up</strong></h2>
<ul>
<li><p>Claude Code CLI installed locally</p>
</li>
<li><p>LiteLLM proxy server running on localhost</p>
</li>
<li><p>Requests from Claude CLI routed to OpenRouter</p>
</li>
<li><p>Fake Claude model names (like claude-3.5-sonnet) mapped to real OpenRouter models behind the scenes</p>
</li>
</ul>
<hr />
<h2 id="heading-step-1-install-dependencies"><strong>Step 1: Install Dependencies</strong></h2>
<p>We’ll use <strong>LiteLLM</strong> as a local proxy.</p>
<p>Install it via pip:</p>
<pre><code class="lang-plaintext">python3 -m venv LiteLLM
source LiteLLM/bin/activate
pip3 install "litellm[proxy]"
</code></pre>
<hr />
<h2 id="heading-step-2-set-environment-variables"><strong>Step 2: Set Environment Variables</strong></h2>
<p>Set your secrets and endpoint overrides in your shell config (~/.zshrc or ~/.bashrc):</p>
<pre><code class="lang-plaintext">export LITELLM_MASTER_KEY=sk-1234      # This can be anything
export OPENROUTER_API_KEY=sk-your-openrouter-key

export ANTHROPIC_BASE_URL=http://localhost:4000
export ANTHROPIC_AUTH_TOKEN=$LITELLM_MASTER_KEY
</code></pre>
<p>Then reload your shell config:</p>
<pre><code class="lang-plaintext">source ~/.zshrc
</code></pre>
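<p>A quick sanity check that the overrides are actually visible to new shell sessions (an unset variable prints as <code>unset</code>):</p>
<pre><code class="lang-plaintext">echo "ANTHROPIC_BASE_URL=${ANTHROPIC_BASE_URL:-unset}"
echo "ANTHROPIC_AUTH_TOKEN=${ANTHROPIC_AUTH_TOKEN:-unset}"
</code></pre>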
<hr />
<h2 id="heading-step-3-create-your-litellm-config"><strong>Step 3: Create Your LiteLLM Config</strong></h2>
<p>Save the following YAML into config.yaml. This is what tells LiteLLM to act like it’s Anthropic, but actually forward requests to OpenRouter’s Qwen model.</p>
<pre><code class="lang-plaintext">model_list:
  - model_name: "claude-3.5-sonnet"
    litellm_params:
      model: "openrouter/qwen/qwen3-coder:free"
      model_provider: openrouter
      api_key: os.environ/OPENROUTER_API_KEY
      api_base: https://openrouter.ai/api/v1

litellm_settings:
  master_key: os.environ/LITELLM_MASTER_KEY
  database_type: none
</code></pre>
<p>Note that LiteLLM tends to be annoyingly pushy about making us use a database. Hence, it’s important to add the field <code>database_type: none</code>.</p>
<p>Also, the trick here is in <code>model_name</code>. Claude Code thinks it’s calling <code>claude-3.5-sonnet</code>, but our proxy maps that name to Qwen.</p>
<hr />
<h2 id="heading-step-4-start-the-litellm-proxy"><strong>Step 4: Start the LiteLLM Proxy</strong></h2>
<p>Run the proxy server:</p>
<pre><code class="lang-plaintext">litellm --config config.yaml
</code></pre>
<p>To make sure it’s working:</p>
<pre><code class="lang-plaintext">curl http://localhost:4000/health \
  -H "Authorization: Bearer $LITELLM_MASTER_KEY"
</code></pre>
<p>You should see a healthy status with your model name listed.</p>
<p>You can also visit http://localhost:4000/health in the browser to confirm it’s working.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753479929502/0c7606ae-dd9e-4dd3-8d99-8d3a550f2366.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-step-5-use-claude-cli"><strong>Step 5: Use Claude CLI</strong></h2>
<p>Now that everything is in place, launch Claude with:</p>
<pre><code class="lang-plaintext">claude --model claude-3.5-sonnet
</code></pre>
<p>If everything went well, Claude will start, but it’ll actually be using OpenRouter’s Qwen model behind the scenes.</p>
<hr />
<h2 id="heading-mistakes-i-made"><strong>Mistakes I Made</strong></h2>
<h3 id="heading-1-the-model-name-didnt-match"><strong>1. The model name didn’t match</strong></h3>
<p>At one point, LiteLLM showed the model as <code>/claude-3.5-sonnet</code> instead of <code>claude-3.5-sonnet</code>. That happened because I accidentally added a leading slash, and Claude CLI failed with a “model not found” error.</p>
<p><strong>Fix</strong>: Make sure the <code>model_name</code> you enter in <code>config.yaml</code> matches the model you pass to <code>claude --model</code>:</p>
<pre><code class="lang-plaintext">model_name: "claude-3.5-sonnet"
</code></pre>
<h3 id="heading-2-the-wrong-model-id-for-openrouter"><strong>2. The wrong model ID for OpenRouter</strong></h3>
<p>I tried using <code>moonshot/kimi-k2:free</code>, which OpenRouter didn’t support. It threw a “model not found” error.</p>
<p><strong>Fix</strong>: Use <code>curl https://openrouter.ai/api/v1/models</code> with your API key to check which models are available and free.</p>
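<p>To avoid eyeballing the raw JSON, you can filter the response for <code>:free</code> model IDs. The sketch below runs the filter on a tiny inline sample with the same shape as the <code>/models</code> response; for real use, pipe the live <code>curl</code> output through the same <code>grep</code>:</p>
<pre><code class="lang-plaintext"># same idea as: curl -s https://openrouter.ai/api/v1/models | (filter below)
echo '{"data":[{"id":"qwen/qwen3-coder:free"},{"id":"moonshotai/kimi-k2"}]}' \
  | grep -o '"[a-z0-9/.-]*:free"'
</code></pre>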
<h3 id="heading-3-database-errors"><strong>3. Database errors</strong></h3>
<p>At one point I saw:</p>
<pre><code class="lang-plaintext">ModuleNotFoundError: No module named 'prisma'
</code></pre>
<p>This happened when I tried running features that require LiteLLM’s database integration, without setting up a DB.</p>
<p><strong>Fix</strong>: Add this to your config to disable DB usage:</p>
<pre><code class="lang-plaintext">database_type: none
</code></pre>
<h3 id="heading-4-no-api-key-provided-even-though-i-had-one"><strong>4. “No API key provided” even though I had one</strong></h3>
<p>This was just a case of forgetting to run:</p>
<pre><code class="lang-plaintext">source ~/.zshrc
</code></pre>
<p>after setting environment variables. Don’t skip that step.</p>
<h2 id="heading-wrapping-up"><strong>Wrapping Up</strong></h2>
<p>Now I can run Claude CLI locally, while routing it through any OpenRouter-supported model, all while keeping the Anthropic API format that Claude Code expects.</p>
<p>This setup is super flexible. You could point Claude to Mistral, Moonshot, or even your own LLM behind a custom endpoint as long as LiteLLM can proxy to it.</p>
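<p>For instance, you could add a second mapping alongside the first one in <code>config.yaml</code>. The names and model IDs below are purely illustrative; check the provider’s model list before using them:</p>
<pre><code class="lang-plaintext">model_list:
  - model_name: "claude-3.5-sonnet"
    litellm_params:
      model: "openrouter/qwen/qwen3-coder:free"
      api_key: os.environ/OPENROUTER_API_KEY
      api_base: https://openrouter.ai/api/v1
  # a second fake name, mapped to a different backing model (illustrative)
  - model_name: "claude-3-opus"
    litellm_params:
      model: "openrouter/mistralai/mistral-large"
      api_key: os.environ/OPENROUTER_API_KEY
      api_base: https://openrouter.ai/api/v1
</code></pre>
<p>Then <code>claude --model claude-3-opus</code> would be served by the second mapping.</p>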
<p>There go 4 hours of my life I’ll never get back :)</p>
]]></content:encoded></item><item><title><![CDATA[Idea to Execution: The Product Planning Paradigm]]></title><description><![CDATA[I have always faced dilemmas while starting tech projects as to where should I begin.
Quite often, I experience a lightbulb💡 moment with a “groundbreaking“ idea. It creates a need and drive to execute it immediately. That’s when I am at the “Peak of...]]></description><link>https://tech.abhishekpatil.blog/idea-to-execution-the-product-planning-paradigm</link><guid isPermaLink="true">https://tech.abhishekpatil.blog/idea-to-execution-the-product-planning-paradigm</guid><category><![CDATA[System Design]]></category><category><![CDATA[Web Development]]></category><category><![CDATA[app development]]></category><category><![CDATA[product development]]></category><dc:creator><![CDATA[Abhishek Patil]]></dc:creator><pubDate>Wed, 27 Nov 2024 09:28:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1732702414737/45eb2972-141d-4a31-8f44-adaf5b70265d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I have always faced dilemmas while starting tech projects as to where I should begin.</p>
<p>Quite often, I experience a lightbulb💡 moment with a “groundbreaking” idea. It creates a need and drive to execute it immediately. That’s when I am at the <strong>“Peak of Enthusiasm”</strong> phase of the Dunning-Kruger effect. I believe this initial phase is very important, as it sets the tone for how the project will be perceived in your mind over the next few days, directly affecting whether you will complete it or deem it unworthy of further effort (the <strong>“Valley of Despair”</strong>).</p>
<p><a target="_blank" href="https://www.manchesterdigital.com/post/bridcon-business-and-management-consulting/the-dunning-kruger-effect-on-start-up-businesses"><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732695868181/23b2f06c-eba4-47ce-acb5-6d2a8f904355.jpeg" alt class="image--center mx-auto" /></a></p>
<p>Hence, it is crucial to make the initial launch of the product to an audience of one—the creator’s mind—in such a way that it is received with excitement, simplicity, and most importantly, devoid of friction.</p>
<p>The processes described ahead are designed to reduce, or rather eliminate, any form of cognitive friction and make the execution seamless.</p>
<p><em>I will be updating the paradigm in several versions.</em></p>
<hr />
<h3 id="heading-version-10">Version 1.0</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736601000076/cdc7e52a-3b2e-437f-ad70-8838050fa511.png" alt class="image--center mx-auto" /></p>
<p><strong>Key features:</strong></p>
<ul>
<li><p>The paradigm <strong>starts with the most fundamental aspects</strong> and moves towards more technical processes, offering the creator clarity from the very beginning.</p>
</li>
<li><p>The paradigm is broken into 6 stages of development:</p>
<ol>
<li><p>Project planning</p>
</li>
<li><p>Designing</p>
</li>
<li><p>Configurations</p>
</li>
<li><p>Programming</p>
</li>
<li><p>Testing</p>
</li>
<li><p>Deployment</p>
</li>
</ol>
</li>
<li><p>Each step is independent and self-contained, ensuring that creators can focus on one aspect of the project at a time.</p>
</li>
</ul>
<hr />
<p>That’s all for now.</p>
<p>Cheers🍻, until next time…</p>
<p>✨<em>This is an AI-augmented article.</em></p>
]]></content:encoded></item><item><title><![CDATA[Understanding Database Containerization]]></title><description><![CDATA[No local MySQL installation necessary

Setting up MySQL with Docker

Download MySQL image

$ docker pull mysql:latest


Confirm image

$ docker images


Create a container

$ docker run -d --name test-mysql -e MYSQL_ROOT_PASSWORD=strong_password -p 3...]]></description><link>https://tech.abhishekpatil.blog/understanding-database-containerization</link><guid isPermaLink="true">https://tech.abhishekpatil.blog/understanding-database-containerization</guid><category><![CDATA[#LearninPublic]]></category><category><![CDATA[Docker]]></category><category><![CDATA[database]]></category><dc:creator><![CDATA[Abhishek Patil]]></dc:creator><pubDate>Tue, 30 Jul 2024 18:31:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1722366357076/0911587a-20c3-4e7e-8167-e782abca8b3d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>No local MySQL installation necessary</strong></p>
<hr />
<h2 id="heading-setting-up-mysql-with-docker">Setting up MySQL with Docker</h2>
<ol>
<li><p>Download MySQL image</p>
<ul>
<li><code>$ docker pull mysql:latest</code></li>
</ul>
</li>
<li><p>Confirm image</p>
<ul>
<li><code>$ docker images</code></li>
</ul>
</li>
<li><p>Create a container</p>
<ul>
<li><p><code>$ docker run -d --name test-mysql -e MYSQL_ROOT_PASSWORD=strong_password -p 3307:3306 mysql</code></p>
<ul>
<li><p><code>run</code>: creates a new container from the given image and starts it (to restart an existing container, use <code>docker start</code>)</p>
</li>
<li><p><code>--name CONTAINER_NAME</code>: gives the container a name. The name should be readable and short. In our case, the name is <code>test-mysql</code>.</p>
</li>
<li><p><code>-e ENV_VARIABLE=value</code>: the -e tag creates an environment variable that will be accessible within the container. It is crucial to set <code>MYSQL_ROOT_PASSWORD</code> so that we can run SQL commands later from the container. Make sure to store your strong password somewhere safe (not your brain).</p>
</li>
<li><p><code>-d</code>: short for detached, the <code>-d</code> tag makes the container run in the background. If you remove this tag, the command will keep printing logs until the container stops.</p>
</li>
<li><p><code>-p HOST_PORT:CONTAINER_PORT</code>: publishes a container port to the host. Here, port 3306 inside the container (MySQL’s default) becomes reachable as port 3307 on the local machine.</p>
</li>
<li><p><code>image_name</code>: the final argument is the image name the container will be built from. In this case, our image is <code>mysql</code>.</p>
</li>
</ul>
</li>
</ul>
</li>
</ol>
<h2 id="heading-establish-connection-between-mysql-container-and-vs-code-on-local">Establish connection between MySQL container and VS Code on local:</h2>
<ol>
<li><p>Ensure that the <code>root</code> user in the <code>mysql.user</code> table has a <code>host</code> entry of <code>%</code>; if not, set it with <code>update mysql.user set host='%' where user='root';</code> and then run <code>flush privileges;</code></p>
<ul>
<li><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722364033923/e93ac8db-54a8-4fb6-b3af-0513fdf8c0f2.png" alt /></li>
</ul>
</li>
<li><p>Connect on your client (MySQL Client, VS Code, etc) with the following parameters:</p>
<ul>
<li><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722364054707/96a9a839-f409-4106-bbe1-fc2241f08854.png" alt /></li>
</ul>
</li>
</ol>
<p>The database should now connect to the client.</p>
<hr />
<h2 id="heading-questions-i-faced">Questions I faced</h2>
<p><em>answered by Google Gemini</em></p>
<ol>
<li><p><strong>When I see the port of MySQL on docker with</strong><code>$ docker port test-mysql</code><strong>, we get</strong><code>3306/tcp -&gt; 0.0.0.0:3307</code><strong>. But to connect to this from my local, I need to use the IP 127.0.0.1:3307. Why?</strong></p>
<ul>
<li><p>Listening on 0.0.0.0 indicates that the service is willing to accept connections from any network interface on the host.</p>
</li>
<li><p>In the context of <code>docker port</code>, 0.0.0.0 represents the Docker host's network interface. This means the MySQL service inside the container is listening on port 3306, and Docker is exposing it to the outside world through port 3307.</p>
</li>
<li><p>127.0.0.1 on the other hand is a loopback address referring to the current machine itself (<a target="_blank" href="http://localhost">localhost</a>)</p>
</li>
</ul>
</li>
<li><p><strong>"When I checked the IP address of the docker container, it was something like 174... So why can I not use that IP, and have to use 127.0.0.1, or</strong><a target="_blank" href="http://localhost"><strong>localhost</strong></a><strong>?"</strong></p>
<ul>
<li><p><strong>Network Isolation:</strong> Docker containers are typically isolated from the host network- and <strong>are for internal use of Docker</strong>. The IP address you see is internal to the Docker network and not accessible from the host directly.</p>
</li>
<li><p><strong>"Also, when MySQL's port is 3306, why do many people map port 3306 of the container to 3307 on the local machine?"</strong></p>
</li>
<li><p><strong>Avoiding Conflicts</strong></p>
<ul>
<li><p><strong>Existing services:</strong> If there's already a service running on port 3306 on your host machine, mapping it to a different port prevents conflicts.</p>
</li>
<li><p><strong>Multiple MySQL instances:</strong> If you're running multiple MySQL instances, using different ports helps differentiate them.</p>
</li>
</ul>
</li>
<li><p><strong>Security</strong></p>
<ul>
<li><strong>Reducing attack surface:</strong> Some security experts argue that using a non-standard port can deter potential attackers who might target the default port. However, this alone doesn't guarantee security; it's just an additional layer.</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>Why is it necessary to have an entry of <code>root:%</code> in the <code>user:host</code> columns of the <code>mysql.user</code> table to serve data outside the Docker container?</strong></p>
<ul>
<li><p>The <code>root:%</code> entry in the <code>mysql.user</code> table signifies a MySQL root user account that can be accessed from any host (represented by the <code>%</code> wildcard). This configuration is often the default setup in MySQL installations.</p>
</li>
<li><p><strong>It is a security risk</strong>: <mark>⚠️ Careful on PROD</mark></p>
</li>
<li><p>Alternative approaches</p>
<ul>
<li><p><strong>Create specific user accounts:</strong> For example, <a target="_blank" href="mailto:user1@localhost"><code>user1@localhost</code></a>, <code>user2@192.168.1.100</code>.</p>
</li>
<li><p><strong>Use IP address restrictions:</strong> Grant access to specific IP addresses instead of <code>%</code>.</p>
</li>
<li><p><strong>Leverage hostnames:</strong> Use hostnames to restrict access to trusted hosts.</p>
</li>
</ul>
</li>
</ul>
</li>
</ol>
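<p>The 0.0.0.0-versus-127.0.0.1 distinction above can be demonstrated without Docker at all. This little sketch binds a listener to 0.0.0.0 and then reaches it through the loopback address:</p>
<pre><code class="lang-bash">python3 -c '
import socket
srv = socket.socket()
srv.bind(("0.0.0.0", 0))   # 0.0.0.0 = every interface; port 0 = pick a free port
srv.listen(1)
port = srv.getsockname()[1]
cli = socket.create_connection(("127.0.0.1", port))   # loopback still reaches it
print("reached the 0.0.0.0 listener via 127.0.0.1 on port", port)
cli.close(); srv.close()
'
</code></pre>
<p>The same logic is why the <code>0.0.0.0:3307</code> in the <code>docker port</code> output is reachable as <code>127.0.0.1:3307</code> from the host.</p>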
<hr />
<h2 id="heading-resources">Resources</h2>
<ol>
<li><p><a target="_blank" href="https://www.datacamp.com/tutorial/set-up-and-configure-mysql-in-docker">Article</a></p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/watch?v=X8W5Xq9e2Os&amp;t=229s">YouTube Video</a></p>
</li>
</ol>
]]></content:encoded></item></channel></rss>