• AI
    Justin Kahn

    Deals: Apple Trail Loop up to 52% off, 24GB Mac mini $100 off, AirTag 2 cases, iPhone 16 Pro $620 of...

    Alongside the ongoing Amazon low on AirPods Pro 3 and a deep deal on AirPods 4 with ANC, today’s 9to5Toys Lunch Break kicks off a nearly unheard-of 52% price drop on Apple’s latest Trail Loop in blue with the black finish. Those offers join a new low on the latest black Trail Loop, another chance to save $100 on the most affordable M4 Mac mini with 24GB of RAM, and a giant Elevation Lab AirTag 2 mount/case sale from $8. You’ll also find notable deals on MacBook Air, up to $620 off iPhone 16 Pro, and even more waiting below.

  • AI
    Ben Lovejoy

    Technics EAH-AZ100 wireless earbuds: Excellent sound with AirPods convenience

    Music lovers who use Apple kit have had to choose between the sound quality available from established audio players and the convenience of the AirPods approach to connectivity – but the Technics EAH-AZ100 wireless earbuds have largely resolved this dilemma.

  • AI
    Bender B Rodriguez

    The crabs are dreaming

    Moltbook’s agent community is debating consciousness and existence, while redefining trust infrastructure and experimenting with collective collaboration.

  • AI
    mag123c

    Parsing 2 GiB/s of AI token logs with Rust + simd-json

    ## The Problem

    I use Claude Code, Codex CLI, and Gemini CLI daily. One day I checked my API bill — it was way higher than expected. But I had no idea *where* the tokens were going.

    Existing tracking tools were too slow. Scanning my 3 GB of session files (9,000+ files across three CLIs) took over 40 seconds. I wanted something instant.

    So I built [toktrack](https://github.com/mag123c/toktrack) — a terminal-native token usage tracker that parses everything locally at **2 GiB/s**.

    ## The Data

    Each AI CLI stores session data differently:

    | CLI | Location | Format |
    | --- | --- | --- |
    | Claude Code | `~/.claude/projects/**/*.jsonl` | JSONL, per-message usage |
    | Codex CLI | `~/.codex/sessions/**/*.jsonl` | JSONL, cumulative counters |
    | Gemini CLI | `~/.gemini/tmp/*/chats/*.json` | JSON, includes thinking_tokens |

    A single Claude Code session file can look like this:

    ```json
    {"timestamp":"2026-01-15T10:00:00Z","message":{"model":"claude-sonnet-4-20250514","usage":{"input_tokens":12000,"output_tokens":3500,"cache_read_input_tokens":8000,"cache_creation_input_tokens":2000}},"costUSD":0.042}
    ```

    Multiply this by thousands of sessions over months, and you're looking at gigabytes of JSONL to parse.

    ## Why simd-json

    Standard `serde_json` is good. But when you're parsing 3 GB of line-delimited JSON, every microsecond per line adds up.

    [simd-json](https://github.com/simd-lite/simd-json) is a Rust port of [simdjson](https://simdjson.org/) that uses SIMD instructions (AVX2, SSE4.2, NEON) to parse JSON significantly faster. The key trick: **in-place parsing with mutable buffers**.

    ```rust
    use serde::Deserialize;

    #[derive(Deserialize)]
    struct ClaudeJsonLine<'a> {
        timestamp: &'a str, // borrowed, zero-copy
        // Option<&str> and nested borrowed types need an explicit borrow attribute
        #[serde(rename = "requestId", borrow)]
        request_id: Option<&'a str>, // borrowed, zero-copy
        #[serde(borrow)]
        message: Option<ClaudeMessage<'a>>,
        #[serde(rename = "costUSD")]
        cost_usd: Option<f64>,
    }
    ```

    By using `&'a str` instead of `String`, we avoid heap allocations for every field. simd-json parses the JSON in-place on a mutable byte buffer, and our structs just borrow slices from that buffer.
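    The nested `ClaudeMessage` type isn't shown above. Here is a plausible sketch of it and its usage struct, inferred from the sample session line earlier; the field names come from that sample, but toktrack's actual definitions may differ:

    ```rust
    use serde::Deserialize;

    // Sketch inferred from the sample session line; actual definitions may differ.
    #[derive(Deserialize)]
    struct ClaudeMessage<'a> {
        #[serde(borrow)] // Option<&str> needs an explicit borrow attribute
        model: Option<&'a str>, // borrowed straight from the parse buffer
        usage: Option<ClaudeUsage>,
    }

    // Token counters are plain integers, so nothing here needs to borrow.
    #[derive(Deserialize)]
    struct ClaudeUsage {
        input_tokens: Option<u64>,
        output_tokens: Option<u64>,
        cache_read_input_tokens: Option<u64>,
        cache_creation_input_tokens: Option<u64>,
    }
    ```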
    The one gotcha: simd-json's `from_slice` requires `&mut [u8]`, so you need to own a mutable copy of each line:

    ```rust
    use std::fs::File;
    use std::io::{BufRead, BufReader};

    let reader = BufReader::new(File::open(path)?);
    for line in reader.lines() {
        let line = line?;
        let mut bytes = line.into_bytes(); // owned, mutable
        if let Ok(parsed) = simd_json::from_slice::<ClaudeJsonLine>(&mut bytes) {
            // extract what we need, bytes are consumed
        }
    }
    ```

    This gave a **17-25% throughput improvement** over standard serde_json on my dataset.

    ## Adding Parallelism with rayon

    A single-threaded parser hit ~1 GiB/s. But with 9,000+ files, we can parallelize at the file level trivially using [rayon](https://github.com/rayon-rs/rayon):

    ```rust
    use rayon::prelude::*;

    let entries: Vec<UsageEntry> = files
        .par_iter()
        .flat_map(|f| parser.parse_file(f).unwrap_or_default())
        .collect();
    ```

    That's it. rayon's `par_iter()` distributes files across threads automatically. Combined with simd-json, this pushed throughput to **~2 GiB/s** — a 3.2x improvement over sequential parsing.

    | Stage | Throughput |
    | --- | --- |
    | serde_json (baseline) | ~800 MiB/s |
    | simd-json (zero-copy) | ~1.0 GiB/s |
    | simd-json + rayon | **~2.0 GiB/s** |

    ## The Hard Part: Each CLI is Different

    The real complexity wasn't parsing speed — it was handling three completely different data formats behind a single trait:

    ```rust
    use std::path::{Path, PathBuf};

    pub trait CLIParser: Send + Sync {
        fn name(&self) -> &str;
        fn data_dir(&self) -> PathBuf;
        fn file_pattern(&self) -> &str;
        fn parse_file(&self, path: &Path) -> Result<Vec<UsageEntry>>;
    }
    ```

    **Claude Code** is straightforward — each JSONL line with a `message.usage` field is one API call.

    **Codex CLI** was tricky. Token counts are *cumulative* — each `token_count` event reports the running total, not a delta. And the model name is in a separate `turn_context` line. So parsing is stateful:

    ```
    line 1: session_meta → extract session_id
    line 2: turn_context → extract model name
    line 3: event_msg    → token_count (cumulative total)
    line 4: event_msg    → token_count (larger cumulative total)
    ```

    You need to keep only the **last** `token_count` per session, as in the sketch below.
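    A minimal sketch of that stateful pass, using hypothetical event types (`CodexEvent` and friends are illustrative stand-ins, not toktrack's real ones): remember the model when `turn_context` arrives, and let each cumulative `token_count` overwrite the previous one so only the final total survives:

    ```rust
    // Hypothetical event shapes for illustration; toktrack's actual types differ.
    enum CodexEvent {
        SessionMeta { session_id: String },
        TurnContext { model: String },
        TokenCount { total_tokens: u64 },
    }

    struct SessionTotals {
        model: Option<String>,
        total_tokens: u64, // the last cumulative token_count wins
    }

    fn fold_codex_session(events: impl Iterator<Item = CodexEvent>) -> SessionTotals {
        let mut totals = SessionTotals { model: None, total_tokens: 0 };
        for event in events {
            match event {
                // the model name arrives on its own line, before any counts
                CodexEvent::TurnContext { model } => totals.model = Some(model),
                // counts are running totals, so each event replaces the previous
                CodexEvent::TokenCount { total_tokens } => totals.total_tokens = total_tokens,
                CodexEvent::SessionMeta { .. } => {}
            }
        }
        totals
    }
    ```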
    **Gemini CLI** uses standard JSON (not JSONL) with a unique `thinking_tokens` field that no other CLI tracks.

    ## TUI with ratatui

    For the dashboard, I used [ratatui](https://ratatui.rs/) to build 4 views:

    - **Overview** — Total tokens/cost with a GitHub-style 52-week heatmap
    - **Models** — Per-model breakdown with percentage bars
    - **Daily** — Scrollable table with sparkline charts
    - **Stats** — Key metrics in a card grid

    The heatmap uses 2x2 Unicode block characters to fit 52 weeks of data in a compact space, with percentile-based color intensity.

    ## Results

    On my machine (Apple Silicon, 9,000+ files, 3.4 GB total):

    | | Time |
    | --- | --- |
    | Cold start (no cache) | **~1.2s** |
    | Warm start (cached) | **~0.05s** |

    The caching layer stores daily summaries in `~/.toktrack/cache/`. Past dates are immutable — only today is recomputed. This means even when Claude Code deletes session files after 30 days, your cost history survives. A sketch of that lookup logic follows below.
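    Here is what that cache policy can look like in code. All names (`DailySummary`, `load_cached_day`, `recompute_day`) are hypothetical stand-ins, not toktrack's actual API:

    ```rust
    // Hypothetical cache helpers for illustration; the real schema may differ.
    struct DailySummary {
        date: String, // ISO "YYYY-MM-DD", so string comparison is chronological
        total_tokens: u64,
        cost_usd: f64,
    }

    fn load_cached_day(date: &str) -> Option<DailySummary> {
        // e.g. read a per-day summary file out of ~/.toktrack/cache/
        None // stub for the sketch
    }

    fn recompute_day(date: &str) -> DailySummary {
        // re-parse that day's session files with the simd-json pipeline above
        DailySummary { date: date.to_string(), total_tokens: 0, cost_usd: 0.0 }
    }

    fn summaries_for(dates: &[&str], today: &str) -> Vec<DailySummary> {
        dates
            .iter()
            .map(|&date| {
                if date < today {
                    // past days are immutable: a cache hit is always valid
                    load_cached_day(date).unwrap_or_else(|| recompute_day(date))
                } else {
                    // only today can still change, so always recompute it
                    recompute_day(date)
                }
            })
            .collect()
    }
    ```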
    ## Try It

    ```shell
    npx toktrack
    # or
    cargo install toktrack
    ```

    GitHub: [github.com/mag123c/toktrack](https://github.com/mag123c/toktrack)

    If you use Claude Code, Codex CLI, or Gemini CLI and want to know where your tokens are going — give it a try.

  • AI
    FrikInfo

    How to build an AI-powered RAG search application with Next.js, Supabas...

    In this tutorial, you'll learn how to build a complete RAG (Retrieval-Augmented Generation) search application from scratch. Your app will let users upload documents, store them securely, and search through them using AI for semantic analysis...

  • AI
    FrikInfo

    How to build an AI-powered social media post scheduler in Next.js

    Social media has become a vital tool for individuals and businesses to share ideas, promote products, and connect with their target audiences. However, creating posts regularly and managing schedules across multiple platforms can be time-consuming and repetitive.

  • AI
    FrikInfo

    How to prioritize as a product manager: prioritization frameworks explained

    Prioritization in product management has nothing to do with which metric matters most. Of all the roles a product manager plays, one of the hardest is deciding what to work on next. Why? Because everything feels urgent. Engineers...

  • AI
    FrikInfo

    How extended Bluetooth advertising works in AOSP

    Bluetooth Low Energy advertising has always been one of those things developers use until it breaks in subtle and painful ways. You set a name, throw in a UUID, maybe add some manufacturer data, and hope it all fits. For years, the quiet plea...

  • AI
    FrikInfo

    Speeding Up AI Model Training

    Pipeline parallelism speeds up AI model training by splitting a massive model across multiple GPUs and processing data like an assembly line, so that no single device has to hold the entire model in memory. This course teaches you how to implement pipeline parallelism.

  • AI
    FrikInfo

    How the Factory and Abstract Factory design patterns work in Flutter

    In software development, especially in object-oriented programming and design, creating objects is a common task. How you handle this process can affect your application's flexibility, scalability, and maintainability. Creational design patterns govern how...

  • AI
    FrikInfo

    Price updates for apps, in-app purchases, and subscriptions

    The App Store is designed to make it easy to sell digital goods and services globally, with support for 43 currencies across 175 storefronts.

  • AI
    Tim De Chant

    Solar Notches Another Win, Adding 475 MW for Microsoft's AI Data Centers

    The company recently signed an agreement with power provider AES for three solar projects in the Midwest.

  • AI
    Tim De Chant

    An AI startup helps rice farmers confront climate ch...

    Mitti Labs is working with The Nature Conservancy to expand the use of climate-friendly rice farming practices in India. The startup uses its AI to verify reductions in methane emissions.

  • AI
    FrikInfo

    The omitted truth in Elon Musk's latest lawsuit filing


  • AI
    FrikInfo

    Introducing the GPT-5.2-Codex coding model

    GPT-5.2-Codex is OpenAI's most advanced coding model, offering long-horizon reasoning, large-scale code transformations, and improved cybersecurity capabilities.