<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>yag.xyz</title>
    <link>https://yag.xyz/en/</link>
    <description>Recent content on yag.xyz</description>
    <generator>Hugo</generator>
    <language>en</language>
    <copyright>© 2024, Yuki Okuda</copyright>
    <lastBuildDate>Tue, 24 Mar 2026 00:00:00 +0900</lastBuildDate>
    <atom:link href="https://yag.xyz/en/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Temporal Land Cover Change Detection Using AlphaEarth Satellite Embeddings</title>
      <link>https://yag.xyz/en/post/alphaearth-embedding-change-detection/</link>
      <pubDate>Tue, 24 Mar 2026 00:00:00 +0900</pubDate>
      <guid>https://yag.xyz/en/post/alphaearth-embedding-change-detection/</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was automatically translated from Japanese by AI.&lt;/em&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;overview&#34;&gt;Overview&lt;/h2&gt;&#xA;&lt;p&gt;In the previous article, I explored structure search using approximate nearest neighbor search with AlphaEarth Foundations Satellite Embeddings (AEF).&lt;/p&gt;&#xA;&lt;a href=&#34;https://yag.xyz/en/post/alphaearth-embedding-structure-search/&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34; class=&#34;bookmark-card&#34;&gt;&#xA;    &lt;div class=&#34;bookmark-card-content&#34;&gt;&#xA;      &lt;div class=&#34;bookmark-card-title&#34;&gt;Finding Similar Locations with AlphaEarth Satellite Embeddings: How Well Can They Capture Structural Features?&lt;/div&gt;&#xA;      &lt;div class=&#34;bookmark-card-desc&#34;&gt;Evaluating structure detection via approximate nearest neighbor search using Google DeepMind AlphaEarth Satellite Embeddings.&lt;/div&gt;&#xA;      &lt;div class=&#34;bookmark-card-url&#34;&gt;&#xA;        &lt;img class=&#34;bookmark-card-favicon&#34; src=&#34;https://www.google.com/s2/favicons?sz=32&amp;amp;domain=yag.xyz&#34; alt=&#34;&#34; width=&#34;16&#34; height=&#34;16&#34; loading=&#34;lazy&#34; /&gt;&#xA;        &lt;span&gt;https://yag.xyz/en/post/alphaearth-embedding-structure-search/&lt;/span&gt;&#xA;      &lt;/div&gt;&#xA;    &lt;/div&gt;&#xA;  &lt;/a&gt;&#xA;&lt;p&gt;Satellite Embeddings are generated by a model trained on Sentinel-2 and Landsat optical imagery as inputs, with land cover classification data such as the USDA Cropland Data Layer as training targets. These Satellite Embeddings have been updated annually since 2017. This means that by comparing embeddings of the same location across different years, we can quantitatively assess how much the land cover has changed. Locations where large-scale land preparation from construction, farmland conversion, or new solar power installations have occurred should show changes in their year-over-year embeddings.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Delegating Proxmox Virtualization Infrastructure Management to Claude Code</title>
      <link>https://yag.xyz/en/post/claude-code-proxmox-management/</link>
      <pubDate>Sun, 22 Mar 2026 12:00:00 +0900</pubDate>
      <guid>https://yag.xyz/en/post/claude-code-proxmox-management/</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was automatically translated from Japanese by AI.&lt;/em&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;my-custom-pc-and-server-management&#34;&gt;My Custom PC and Server Management&lt;/h2&gt;&#xA;&lt;p&gt;I run Proxmox as a virtualization platform on my custom-built PC, running multiple VMs for different purposes. I have Windows installed for occasional PC gaming, and an Ubuntu VM with GPU passthrough for data competitions like Kaggle — it&amp;rsquo;s become a fairly complex setup.&lt;/p&gt;&#xA;&lt;p&gt;However, managing this kind of virtualized environment is honestly not my strong suit, coming from a software-layer background. While I can handle the straightforward Proxmox use cases without issues, I&amp;rsquo;d stumble whenever I tried anything slightly complex, and hardware-related configurations were always a struggle.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Searching Satellite Images with Text — A Vision Language Model Approach</title>
      <link>https://yag.xyz/en/post/vlm-satellite-image-text-search/</link>
      <pubDate>Wed, 18 Mar 2026 10:00:00 +0900</pubDate>
      <guid>https://yag.xyz/en/post/vlm-satellite-image-text-search/</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was automatically translated from Japanese by AI.&lt;/em&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;overview&#34;&gt;Overview&lt;/h2&gt;&#xA;&lt;p&gt;In &lt;a href=&#34;https://yag.xyz/post/alphaearth-embedding-structure-search/&#34;&gt;the previous article&lt;/a&gt;, I tried a Query-by-Example search using AlphaEarth Satellite Embeddings to find &amp;ldquo;similar structures.&amp;rdquo; Since the approach involved selecting a rectangle on a map and using approximate nearest neighbor search (ANN) to find similar locations, it was not possible to directly express intentions like &amp;ldquo;find airports&amp;rdquo; or &amp;ldquo;find golf courses.&amp;rdquo;&lt;/p&gt;&#xA;&lt;a href=&#34;https://yag.xyz/post/alphaearth-embedding-structure-search/&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34; class=&#34;bookmark-card&#34;&gt;&#xA;    &lt;div class=&#34;bookmark-card-content&#34;&gt;&#xA;      &lt;div class=&#34;bookmark-card-title&#34;&gt;AlphaEarth Satellite Embeddingsで似た場所を探す：構造物の特徴をどこまで捉えられるか&lt;/div&gt;&#xA;      &lt;div class=&#34;bookmark-card-desc&#34;&gt;Google DeepMindのAlphaEarth Satellite Embeddingsを用いた近似最近傍探索による構造物検出の検証。&lt;/div&gt;&#xA;      &lt;div class=&#34;bookmark-card-url&#34;&gt;&#xA;        &lt;img class=&#34;bookmark-card-favicon&#34; src=&#34;https://www.google.com/s2/favicons?sz=32&amp;amp;domain=yag.xyz&#34; alt=&#34;&#34; width=&#34;16&#34; height=&#34;16&#34; loading=&#34;lazy&#34; /&gt;&#xA;        &lt;span&gt;https://yag.xyz/post/alphaearth-embedding-structure-search/&lt;/span&gt;&#xA;      &lt;/div&gt;&#xA;    &lt;/div&gt;&#xA;  &lt;/a&gt;&#xA;&lt;p&gt;This time, I&amp;rsquo;ll try building a system that searches satellite images directly using natural language text queries. Using RemoteCLIP, a VLM (Vision-Language Model) fine-tuned for remote sensing imagery, I can search for matching satellite image tiles simply by entering text like &lt;code&gt;&amp;quot;airport&amp;quot;&lt;/code&gt; or &lt;code&gt;&amp;quot;golf course&amp;quot;&lt;/code&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Verifying the &#34;Chinese Satellites Pass Over Japan Every 10 Minutes&#34; Report with Public Data</title>
      <link>https://yag.xyz/en/post/yaogan-tracker-verification/</link>
      <pubDate>Mon, 16 Mar 2026 13:00:00 +0900</pubDate>
      <guid>https://yag.xyz/en/post/yaogan-tracker-verification/</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was automatically translated from Japanese by AI.&lt;/em&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;overview&#34;&gt;Overview&lt;/h2&gt;&#xA;&lt;p&gt;On March 14, 2026, the Yomiuri Shimbun published an article titled &lt;strong&gt;&amp;ldquo;Chinese Satellites Pass Over Japan Every 10 Minutes, &amp;lsquo;Monitoring&amp;rsquo; SDF and US Military Bases&amp;hellip; Yomiuri Analyzes &amp;lsquo;Yaogan&amp;rsquo; Orbits&amp;rdquo;&lt;/strong&gt;. According to the report, approximately 80 of roughly 160 Chinese reconnaissance satellites in the &amp;ldquo;Yaogan&amp;rdquo; series are actively passing over Japan approximately every 10 minutes, with as many as 60 high-frequency passes per day near the Yokosuka Naval Base.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Finding Similar Locations with AlphaEarth Satellite Embeddings: How Well Can They Capture Structural Features?</title>
      <link>https://yag.xyz/en/post/alphaearth-embedding-structure-search/</link>
      <pubDate>Wed, 11 Mar 2026 00:00:00 +0900</pubDate>
      <guid>https://yag.xyz/en/post/alphaearth-embedding-structure-search/</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was automatically translated from Japanese by AI.&lt;/em&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;overview&#34;&gt;Overview&lt;/h2&gt;&#xA;&lt;p&gt;Google DeepMind has released AlphaEarth Foundations Satellite Embeddings (AEF), a foundation model for satellite imagery. It represents every piece of land on Earth as a 64-dimensional vector (embedding) at 10m resolution. Annual data has been freely available on Google Earth Engine since 2017, and the 2025 dataset was published on 2026/3/11.&lt;/p&gt;&#xA;&lt;blockquote class=&#34;twitter-tweet&#34;&gt;&#xA;  &lt;a href=&#34;https://twitter.com/googleearth/status/2031024842498023718&#34;&gt;&lt;/a&gt;&#xA;&lt;/blockquote&gt;&#xA;&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&#xA;&#xA;&lt;p&gt;The &lt;a href=&#34;https://arxiv.org/abs/2507.22291&#34;&gt;paper&lt;/a&gt; published in 2025 evaluates AEF across 15 benchmarks, including land cover classification (US and Europe), crop mapping (Canada, Ethiopia, and sub-Saharan Africa), tree species distribution estimation, oil palm plantation detection, evapotranspiration estimation, and land cover change detection over time. It reports performance that broadly surpasses existing methods.&lt;/p&gt;</description>
    </item>
    <item>
      <title>My Independent Analysis of the Satellite Imagery Tasks from the DEEP DIVE Crowdfunding</title>
      <link>https://yag.xyz/en/post/deepdive-satellite-imagery/</link>
      <pubDate>Mon, 23 Feb 2026 12:53:32 +0900</pubDate>
      <guid>https://yag.xyz/en/post/deepdive-satellite-imagery/</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was automatically translated from Japanese by AI.&lt;/em&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;overview&#34;&gt;Overview&lt;/h2&gt;&#xA;&lt;p&gt;I&amp;rsquo;ve been following the YouTube channel &amp;ldquo;&lt;a href=&#34;https://www.youtube.com/@deepdivecast-p8u&#34;&gt;DEEP DIVE Cast&lt;/a&gt;&amp;rdquo; from an organization I first supported through their &lt;a href=&#34;https://camp-fire.jp/projects/821287/&#34;&gt;initial crowdfunding campaign&lt;/a&gt;. Recently, they posted a video introducing the challenges they want to solve alongside a call for a &lt;a href=&#34;https://camp-fire.jp/projects/914341/&#34;&gt;new crowdfunding campaign&lt;/a&gt;.&lt;/p&gt;&#xA;&lt;iframe width=&#34;560&#34; height=&#34;315&#34; src=&#34;https://www.youtube.com/embed/iy3bhgXvbLE?si=pvBZg2A8xV43_vPI&#34; title=&#34;YouTube video player&#34; frameborder=&#34;0&#34; allow=&#34;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share&#34; referrerpolicy=&#34;strict-origin-when-cross-origin&#34; allowfullscreen&gt;&lt;/iframe&gt;&#xA;&lt;p&gt;In short, it was about applying AI to satellite imagery. The task turned out to be quite fascinating as a real-world problem with many intertwined elements, so I decided to think through how I would approach it.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Implementing Claude Code Plan Mode in Your Own AI Agent</title>
      <link>https://yag.xyz/en/post/ai-agent-plan-mode-example/</link>
      <pubDate>Mon, 16 Feb 2026 13:31:12 +0900</pubDate>
      <guid>https://yag.xyz/en/post/ai-agent-plan-mode-example/</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was automatically translated from Japanese by AI.&lt;/em&gt;&lt;/p&gt;&#xA;&lt;p&gt;A commonly cited best practice for using Claude Code effectively is the importance of starting with Plan Mode to create a plan first. Rather than diving straight into writing code, organizing what needs to be done before proceeding to implementation helps reduce rework and produce more accurate results.&lt;/p&gt;&#xA;&lt;p&gt;I wanted to incorporate this Plan Mode mechanism into my own AI Agent, so I recreated it using the &lt;a href=&#34;https://platform.claude.com/docs/en/agent-sdk/overview&#34;&gt;Claude Agent SDK&lt;/a&gt;. In this article, I&amp;rsquo;ll explain what Claude Code&amp;rsquo;s Plan Mode does internally and introduce an implementation that reproduces it in a custom AI Agent.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Centralized Claude Code Usage Stats Across Multiple Environments with OpenTelemetry</title>
      <link>https://yag.xyz/en/post/claude-code-otel/</link>
      <pubDate>Wed, 04 Feb 2026 09:07:59 +0900</pubDate>
      <guid>https://yag.xyz/en/post/claude-code-otel/</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was automatically translated from Japanese by AI.&lt;/em&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;code&gt;ccusage&lt;/code&gt; is a well-known tool for visualizing Claude Code usage statistics, but when you use Claude Code across multiple environments such as remote servers, it can only aggregate stats for each environment individually. In my case, I work across my laptop, GPU servers for data competitions like Kaggle, and various development environments built on Proxmox. I needed a way to view Claude Code usage statistics across all of them.&lt;/p&gt;</description>
    </item>
    <item>
      <title>GPU Experiment Job Management with pueue</title>
      <link>https://yag.xyz/en/post/pueue-job-management/</link>
      <pubDate>Tue, 27 Jan 2026 09:48:37 +0900</pubDate>
      <guid>https://yag.xyz/en/post/pueue-job-management/</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was automatically translated from Japanese by AI.&lt;/em&gt;&lt;/p&gt;&#xA;&lt;p&gt;In data competitions like Kaggle and atmaCup, efficiently running GPU-based training and inference experiments requires keeping the GPU busy at all times. Manually submitting experiments one by one inevitably leads to idle time. By queuing up multiple experiments in advance so that the next one starts automatically when the previous one finishes, you can maximize GPU utilization.&lt;/p&gt;&#xA;&lt;p&gt;In this post, I&amp;rsquo;ll explain how to manage GPU allocation in a local environment using a job management tool called pueue. Note that this focuses on GPU training on a single server and does not cover situations where multiple instances can be flexibly used in cloud environments or on Colab.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Setting Up a Data Competition Environment in the Agentic Coding Era</title>
      <link>https://yag.xyz/en/post/data_competition_setup/</link>
      <pubDate>Mon, 26 Jan 2026 10:28:44 +0900</pubDate>
      <guid>https://yag.xyz/en/post/data_competition_setup/</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was automatically translated from Japanese by AI.&lt;/em&gt;&lt;/p&gt;&#xA;&lt;p&gt;I recently participated in a data competition (&lt;a href=&#34;https://www.guruguru.science/competitions/31/&#34;&gt;atmacup#23&lt;/a&gt;), which prompted me to build a competition environment suited for the Agentic Coding era. While my competition results were underwhelming, I personally felt the potential of Agentic Coding and also identified some clear challenges.&lt;/p&gt;&#xA;&lt;p&gt;In this post, I&amp;rsquo;ll share the experimentation environment I built for this competition.&lt;/p&gt;&#xA;&lt;h2 id=&#34;philosophy-of-collaborating-with-agents&#34;&gt;Philosophy of Collaborating with Agents&lt;/h2&gt;&#xA;&lt;p&gt;When delegating the bulk of analysis and coding to an Agent, I aimed for the following three goals:&lt;/p&gt;</description>
    </item>
    <item>
      <title>About</title>
      <link>https://yag.xyz/en/about/</link>
      <pubDate>Thu, 28 Feb 2019 00:00:00 +0000</pubDate>
      <guid>https://yag.xyz/en/about/</guid>
      <description>&lt;h1 id=&#34;yuki-okuda&#34;&gt;Yuki Okuda&lt;/h1&gt;&#xA;&lt;h2 id=&#34;bio&#34;&gt;Bio&lt;/h2&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;Ubie, Inc. — Software Engineer (Machine Learning) (2020/07 - 2025/09)&lt;/li&gt;&#xA;&lt;li&gt;Sansan, Inc. — DSOC Researcher (2018/01 - 2020/06)&lt;/li&gt;&#xA;&lt;li&gt;Recruit Technologies Co., Ltd. (2015/04 - 2017/12)&lt;/li&gt;&#xA;&lt;li&gt;Nara Institute of Science and Technology — M.S. in Information Science (2013/04 - 2015/03)&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 id=&#34;resume&#34;&gt;Resume&lt;/h2&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://yag.xyz/resume_ja.pdf&#34;&gt;Resume (Japanese)&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://yag.xyz/resume_en.pdf&#34;&gt;Resume (English)&lt;/a&gt;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 id=&#34;blogoutput&#34;&gt;Blog/Output&lt;/h2&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;Blog&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://yag-ays.github.io/&#34;&gt;https://yag-ays.github.io/&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;http://wolfin.hatenablog.com/&#34;&gt;http://wolfin.hatenablog.com/&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;http://yagays.github.io/&#34;&gt;http://yagays.github.io/&lt;/a&gt; (2012/05 - 2015/03, no longer updated)&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;Website&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://zenn.dev/yag_ays&#34;&gt;https://zenn.dev/yag_ays&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://qiita.com/yagays&#34;&gt;https://qiita.com/yagays&lt;/a&gt;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;/li&gt;&#xA;&lt;/ul&gt;</description>
    </item>
  </channel>
</rss>
