{"id":4184,"date":"2025-10-17T11:03:25","date_gmt":"2025-10-17T05:33:25","guid":{"rendered":"https:\/\/navveenbalani.dev\/?p=4184"},"modified":"2025-10-17T11:06:14","modified_gmt":"2025-10-17T05:36:14","slug":"why-llm-coding-copilots-are-failing-to-deliver-real-value","status":"publish","type":"post","link":"https:\/\/navveenbalani.dev\/index.php\/articles\/artificial-intelligence\/generative-ai\/why-llm-coding-copilots-are-failing-to-deliver-real-value\/","title":{"rendered":"Why LLM Coding Copilots Are Failing to Deliver Real Value"},"content":{"rendered":"\n<p>There\u2019s a bold narrative sweeping through the software industry: <em>AI coding assistants will redefine engineering.<\/em> We\u2019ve all seen the headlines \u2014 \u201cAll code will soon be AI-generated,\u201d \u201cDevelopers 10\u00d7 more productive,\u201d \u201cAI writing most of our applications.\u201d<\/p>\n\n\n\n<p>The promise is seductive. The reality is far more complex.<\/p>\n\n\n\n<p>After building and deploying multiple <strong>end-to-end production systems<\/strong> using tools like <strong>GitHub Copilot, Claude, Gemini, and OpenAI models<\/strong>, one conclusion stands out clearly:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\"><p>These tools can generate code \u2014 but they cannot engineer software.<\/p><\/blockquote>\n\n\n\n<p>They deliver impressive demos and quick wins for isolated snippets, yet struggle the moment they step into real, evolving systems. What follows is not a theoretical analysis, but observations from actual implementation \u2014 where productivity meets production.<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h3>The Grand Promise<\/h3>\n\n\n\n<p>In every major launch, we see AI copilots positioned as game-changers. They can generate boilerplate code, fix bugs, create unit tests, and even build small apps from prompts. 
The idea of a \u201cdeveloper multiplier\u201d \u2014 where one engineer plus AI equals the output of five \u2014 has become a central theme in the AI transformation story.<\/p>\n\n\n\n<p>And to be fair, there\u2019s value in the promise. For repetitive coding, documentation, or scaffolding, copilots can genuinely accelerate workflows. They reduce cognitive load for simple, pattern-based tasks. But that\u2019s where the value plateaus.<\/p>\n\n\n\n<p>Because software engineering is not about lines of code \u2014 it\u2019s about <em>decisions.<\/em> Architecture, system design, trade-offs, scalability, resilience, and security \u2014 these are not patterns to be predicted; they are choices made with intent. That\u2019s where LLM copilots begin to fail.<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h3>The Reality Check<\/h3>\n\n\n\n<h3>1. Architectural Incoherence<\/h3>\n\n\n\n<p>LLMs can generate functional code fragments, but they lack architectural context. In one of my test builds, the AI used three different state-management patterns within the same feature \u2014 not by choice, but by confusion. The output \u201clooked right\u201d locally but created an unmaintainable structure when scaled.<\/p>\n\n\n\n<p>A human engineer ensures consistency across modules. The AI, on the other hand, simply mimics whichever pattern appears most statistically probable based on its training data.<\/p>\n\n\n\n<h3>2. No System-Level Thinking<\/h3>\n\n\n\n<p>Copilots are brilliant at the <em>micro<\/em> level \u2014 single functions or classes \u2014 but blind at the <em>macro<\/em> level. They don\u2019t maintain a mental model of the system. They can\u2019t reason across files or understand interdependencies. In one case, the AI hardcoded configuration and pricing logic directly into multiple functions, ignoring the concept of centralized configuration altogether. 
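<\/p>\n\n\n\n<p>To make the contrast concrete, here is a minimal sketch of the centralized alternative. This is illustrative Python only, not code from the project; the class, field names, and values are placeholders:<\/p>\n\n\n\n

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PricingConfig:
    """Single source of truth for values that would otherwise be
    hardcoded as literals inside multiple functions.
    (All names and numbers here are illustrative placeholders.)"""
    base_price: float = 9.99
    discount_rate: float = 0.10
    currency: str = "USD"

CONFIG = PricingConfig()  # imported wherever pricing logic is needed

def discounted_price(config: PricingConfig = CONFIG) -> float:
    # Changing PricingConfig in one place updates every caller,
    # instead of hunting down duplicated magic numbers.
    return round(config.base_price * (1 - config.discount_rate), 2)
```

\n\n\n\n<p>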
It \u201csolved\u201d the local task while breaking scalability and maintainability for the entire application.<\/p>\n\n\n\n<h3>3. Error Handling: The Forgotten Path<\/h3>\n\n\n\n<p>AI-generated code consistently misses the \u201cunhappy path.\u201d In testing a payment flow, Copilot produced near-perfect happy-path logic \u2014 but no retry, no transaction rollback, and no error visibility for partial failures. Exceptions were silently caught and ignored. A production-grade engineer anticipates what happens when <em>things go wrong<\/em>. LLMs simply don\u2019t \u2014 unless explicitly told.<\/p>\n\n\n\n<h3>4. Hallucinated Logic<\/h3>\n\n\n\n<p>Sometimes, the AI invents logic that seems valid but doesn\u2019t exist. During integration testing, one generated function appeared out of nowhere. It duplicated, with slight modifications, a function already in the codebase. This wasn\u2019t human error; it was the model losing context mid-generation. Such hallucinations create debugging chaos later, because the logic seems \u201cplausible,\u201d but it\u2019s not actually wired into the program flow.<\/p>\n\n\n\n<h3>5. Blind Spots for Non-Functional Requirements<\/h3>\n\n\n\n<p>Performance, security, and scalability don\u2019t feature in an LLM\u2019s predictive scope unless prompted. One AI-generated snippet created a hardcoded retry loop with fixed delays \u2014 perfect for small workloads, catastrophic at scale. Another skipped token expiration checks entirely. AI doesn\u2019t \u201cforget\u201d these things \u2014 it never <em>knew<\/em> them. 
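<\/p>\n\n\n\n<p>The fixed-delay retry loop is worth pausing on, because the production-grade version is small and well understood. Here is a hedged, illustrative Python sketch (the callable and parameter names are placeholders, and a real payment flow would additionally need rollback and alerting):<\/p>\n\n\n\n

```python
import random
import time

def call_with_backoff(operation, max_attempts=5, base_delay=0.5, max_delay=30.0):
    """Retry a transient-failure-prone callable with exponential backoff
    and jitter. A fixed delay (e.g. sleep(1) on every failure) synchronizes
    retries across clients and keeps hammering an already-struggling
    service; backoff plus jitter spreads the load as traffic grows."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise  # surface the failure; never swallow it silently
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(delay * random.uniform(0.5, 1.0))  # jitter
```

\n\n\n\n<p>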
They\u2019re not patterns in code; they\u2019re principles of engineering judgment.<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h3>The Hidden Trap: Crowdsourced Thinking<\/h3>\n\n\n\n<p>There\u2019s a deeper, subtler problem emerging \u2014 <strong>LLM copilots make us think in a crowdsourced way.<\/strong> They generate what the majority of the internet has done before \u2014 the median of prior knowledge, not the frontier of new ideas.<\/p>\n\n\n\n<p>Ask them to build something with new APIs, unfamiliar frameworks, or original architectures, and they stumble. The AI\u2019s reasoning is rooted in yesterday\u2019s patterns, not tomorrow\u2019s possibilities.<\/p>\n\n\n\n<p>This \u201caveraged intelligence\u201d becomes dangerous for innovation. It recommends complex solutions when simpler ones exist. It follows trends, not insight. For example, when a single API call could solve a use case, the AI might propose a three-layer abstraction pattern because it has seen that in open-source repositories. In other words \u2014 it crowdsources your thinking without you realizing it.<\/p>\n\n\n\n<p>This subtle influence can push organizations away from <em>new thinking<\/em> and toward <em>conventional pattern mimicry<\/em>. For an industry built on innovation, that\u2019s a quiet regression.<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h3>The Missing Holistic Approach<\/h3>\n\n\n\n<p>Even when copilots appear to \u201ccomplete\u201d an app, they miss the essentials that experienced developers never overlook \u2014<\/p>\n\n\n\n<ul><li>version upgrades and compatibility,<\/li><li>build processes and deployment strategies,<\/li><li>logging, monitoring, and performance tuning,<\/li><li>dependency management, and<\/li><li>security baselines.<\/li><\/ul>\n\n\n\n<p>These gaps are invisible until the project reaches production. 
Unless you\u2019ve personally designed, built, deployed, and maintained complex systems, it\u2019s easy to assume the AI has it covered \u2014 it doesn\u2019t.<\/p>\n\n\n\n<p>Copilots operate with <strong>narrow focus<\/strong>, not holistic awareness. They can code a feature, but they don\u2019t think about the ecosystem the feature lives in. That distinction separates a working prototype from a sustainable system.<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h3>The Benchmark Mirage<\/h3>\n\n\n\n<p>Benchmarks fuel the illusion of progress. Tests like <strong>HumanEval<\/strong> or <strong>SWE-Bench<\/strong> showcase impressive accuracy for self-contained coding problems \u2014 but that\u2019s not real-world software development. These benchmarks test for <em>correctness of output<\/em>, not <em>soundness of design<\/em>.<\/p>\n\n\n\n<p>A Copilot or LLM might pass a functional test while introducing technical debt that explodes months later. Demos show best-case results, not the debugging, rework, and refactoring that follow.<\/p>\n\n\n\n<p>In one real-world scenario, an AI-generated analytics module spammed events continuously, inflating cloud bills by hundreds of dollars. Another assistant, when tested on a live .NET project, repeatedly generated unbuildable pull requests. The tools performed perfectly in the demo \u2014 and poorly in deployment.<\/p>\n\n\n\n<p>Benchmarks measure <em>speed<\/em>. Engineering measures <em>sustainability<\/em>.<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h3>The Large Context Trap<\/h3>\n\n\n\n<p>As LLMs evolve, their context windows have expanded dramatically \u2014 from a few thousand tokens to millions. On paper, this promises \u201csystem-level\u201d understanding: the ability to reason across entire codebases, architectures, and documentation. In practice, it introduces a new illusion of capability.<\/p>\n\n\n\n<p>Having more context is not the same as having more understanding. 
Even with vast input windows, LLMs still treat information statistically \u2014 not structurally. They can <em>see<\/em> the whole project, but they don\u2019t <em>interpret<\/em> its intent. The model does not reason about architectural relationships, performance implications, or security dependencies; it merely predicts patterns that appear probable across a larger span of text.<\/p>\n\n\n\n<p>In one real-world experiment, feeding an entire service repository into a long-context model produced elegant summaries and detailed-looking refactors \u2014 yet the proposed changes broke key integration contracts. The model recognized syntax and flow, but not system behavior.<\/p>\n\n\n\n<p>The danger of the Large Context Trap is subtle. The illusion of \u201ccomplete awareness\u201d often convinces teams that the AI now understands their system holistically \u2014 when, in reality, it\u2019s only extending its statistical horizon. Without reasoning, memory, or intent, scale alone cannot replace architectural thinking.<\/p>\n\n\n\n<p>True system intelligence requires structured awareness \u2014 not longer context windows, but the ability to <em>model relationships<\/em>, <em>reason over constraints<\/em>, and <em>preserve design integrity<\/em> across decisions. Until copilots evolve to that level, they will continue to produce code that looks coherent yet fails in operation.<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h3>Why \u201cAI Will Replace Engineers\u201d Is the Wrong Question<\/h3>\n\n\n\n<p>Saying that copilots will replace engineers is like saying Excel replaces financial analysts. It doesn\u2019t. It scales their ability to work with data \u2014 but the thinking, reasoning, and judgment still belong to the human.<\/p>\n\n\n\n<p>LLMs can write code. 
They can\u2019t reason about <em>why<\/em> the code should exist, or <em>how<\/em> it fits into a larger system.<\/p>\n\n\n\n<p>That\u2019s why the \u201cAI replacing engineers\u201d narrative is misleading. It confuses automation with understanding. The copilots are assistants \u2014 not autopilots. And the best engineering teams know this distinction defines success or failure in real deployments.<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h3>\ud83d\udd27 The Road Ahead<\/h3>\n\n\n\n<p>If LLM copilots are to become meaningful contributors to software engineering, they need a <strong>fundamental redesign<\/strong> \u2014 not just larger models or faster inference speeds.<\/p>\n\n\n\n<p>The current generation operates within a narrow window: they assist in generating code, but they don\u2019t participate in engineering. They lack the systemic awareness that defines real software creation \u2014 architecture, integration, performance, deployment, security, and lifecycle management.<\/p>\n\n\n\n<p>Engineering isn\u2019t linear. It\u2019s an interconnected process where one decision affects many others \u2014 from dependency chains and version upgrades to runtime performance, user experience, and security posture. Today\u2019s copilots don\u2019t see those connections; they work line by line, not layer by layer.<\/p>\n\n\n\n<p>They need to evolve from <em>code predictors<\/em> into <em>contextual collaborators<\/em> \u2014 systems that understand project structure, dependencies, testing, and delivery pipelines holistically. This requires moving beyond language models into <strong>engineering models<\/strong> that reason about software as a living ecosystem.<\/p>\n\n\n\n<p>At the same time, the industry must <strong>re-examine<\/strong> its direction. The rush to train ever-larger models and flood the market with AI coding tools has become a competition of scale rather than substance. 
Billions of dollars are being spent <strong>chasing leaderboard positions<\/strong> \u2014 while the actual developer experience and production readiness remain secondary.<\/p>\n\n\n\n<p>What\u2019s needed now is not more size, but more sense. We need copilots that respect the realities of engineering \u2014 grounded in correctness, maintainability, and performance \u2014 and that integrate seamlessly into how software is truly built and maintained.<\/p>\n\n\n\n<p>The goal isn\u2019t to automate developers out of the loop. It\u2019s to <strong>elevate<\/strong> them \u2014 providing insight, structure, and efficiency while preserving human judgment. Only when copilots align with the principles of disciplined software engineering will they deliver real, measurable value \u2014 in production, at scale, and over time.<\/p>\n\n\n\n<p>The next generation of copilots must blend <strong>reasoning, responsibility, and restraint<\/strong>. They should not just predict the next line of code, but understand <em>why<\/em> that line matters. They must combine deep contextual learning with lightweight, sustainable compute \u2014 an evolution from \u201cLarge Language Models\u201d to <strong>Lean Engineering Models<\/strong> that prioritize cost, performance, and environmental impact alongside capability.<\/p>\n\n\n\n<p>That\u2019s the real challenge \u2014 and the real opportunity \u2014 in the road ahead for AI and software engineering.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>There\u2019s a bold narrative sweeping through the software industry: AI coding assistants will redefine engineering. We\u2019ve all seen the headlines \u2014 \u201cAll code will soon be AI-generated,\u201d \u201cDevelopers 10\u00d7 more productive,\u201d \u201cAI writing most of our applications.\u201d The promise is seductive. The reality is far more complex. 
After building and deploying multiple end-to-end production systems [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":3988,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[394,3,267,10,324],"tags":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v16.0.2 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Why LLM Coding Copilots Are Failing to Deliver Real Value - Current and Future Technology Trends by Navveen Balani<\/title>\n<meta name=\"description\" content=\"Why LLM Coding Copilots Are Failing to Deliver Real Value - Generative AI\" \/>\n<link rel=\"canonical\" href=\"https:\/\/navveenbalani.dev\/index.php\/articles\/artificial-intelligence\/generative-ai\/why-llm-coding-copilots-are-failing-to-deliver-real-value\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Why LLM Coding Copilots Are Failing to Deliver Real Value - Current and Future Technology Trends by Navveen Balani\" \/>\n<meta property=\"og:description\" content=\"Why LLM Coding Copilots Are Failing to Deliver Real Value - Generative AI\" \/>\n<meta property=\"og:url\" content=\"https:\/\/navveenbalani.dev\/index.php\/articles\/artificial-intelligence\/generative-ai\/why-llm-coding-copilots-are-failing-to-deliver-real-value\/\" \/>\n<meta property=\"og:site_name\" content=\"Current and Future Technology Trends by Navveen Balani\" \/>\n<meta property=\"article:published_time\" content=\"2025-10-17T05:33:25+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-10-17T05:36:14+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/navveenbalani.dev\/wp-content\/uploads\/2023\/07\/Gen-AI.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1024\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n<meta name=\"twitter:card\" 
content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Est. reading time\">\n\t<meta name=\"twitter:data1\" content=\"8 minutes\">\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebSite\",\"@id\":\"https:\/\/navveenbalani.dev\/#website\",\"url\":\"https:\/\/navveenbalani.dev\/\",\"name\":\"Current and Future Technology Trends by Navveen Balani\",\"description\":\"Current and Future Technology Trends by Navveen Balani\",\"publisher\":{\"@id\":\"https:\/\/navveenbalani.dev\/#\/schema\/person\/51f7ab14b20611d95e3c7fd4ea0950bf\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":\"https:\/\/navveenbalani.dev\/?s={search_term_string}\",\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/navveenbalani.dev\/index.php\/articles\/artificial-intelligence\/generative-ai\/why-llm-coding-copilots-are-failing-to-deliver-real-value\/#primaryimage\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/navveenbalani.dev\/wp-content\/uploads\/2023\/07\/Gen-AI.jpg\",\"width\":1024,\"height\":1024},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/navveenbalani.dev\/index.php\/articles\/artificial-intelligence\/generative-ai\/why-llm-coding-copilots-are-failing-to-deliver-real-value\/#webpage\",\"url\":\"https:\/\/navveenbalani.dev\/index.php\/articles\/artificial-intelligence\/generative-ai\/why-llm-coding-copilots-are-failing-to-deliver-real-value\/\",\"name\":\"Why LLM Coding Copilots Are Failing to Deliver Real Value - Current and Future Technology Trends by Navveen 
Balani\",\"isPartOf\":{\"@id\":\"https:\/\/navveenbalani.dev\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/navveenbalani.dev\/index.php\/articles\/artificial-intelligence\/generative-ai\/why-llm-coding-copilots-are-failing-to-deliver-real-value\/#primaryimage\"},\"datePublished\":\"2025-10-17T05:33:25+00:00\",\"dateModified\":\"2025-10-17T05:36:14+00:00\",\"description\":\"Why LLM Coding Copilots Are Failing to Deliver Real Value - Generative AI\",\"breadcrumb\":{\"@id\":\"https:\/\/navveenbalani.dev\/index.php\/articles\/artificial-intelligence\/generative-ai\/why-llm-coding-copilots-are-failing-to-deliver-real-value\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/navveenbalani.dev\/index.php\/articles\/artificial-intelligence\/generative-ai\/why-llm-coding-copilots-are-failing-to-deliver-real-value\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/navveenbalani.dev\/index.php\/articles\/artificial-intelligence\/generative-ai\/why-llm-coding-copilots-are-failing-to-deliver-real-value\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"item\":{\"@type\":\"WebPage\",\"@id\":\"https:\/\/navveenbalani.dev\/\",\"url\":\"https:\/\/navveenbalani.dev\/\",\"name\":\"Home\"}},{\"@type\":\"ListItem\",\"position\":2,\"item\":{\"@type\":\"WebPage\",\"@id\":\"https:\/\/navveenbalani.dev\/index.php\/articles\/artificial-intelligence\/generative-ai\/why-llm-coding-copilots-are-failing-to-deliver-real-value\/\",\"url\":\"https:\/\/navveenbalani.dev\/index.php\/articles\/artificial-intelligence\/generative-ai\/why-llm-coding-copilots-are-failing-to-deliver-real-value\/\",\"name\":\"Why LLM Coding Copilots Are Failing to Deliver Real 
Value\"}}]},{\"@type\":\"Article\",\"@id\":\"https:\/\/navveenbalani.dev\/index.php\/articles\/artificial-intelligence\/generative-ai\/why-llm-coding-copilots-are-failing-to-deliver-real-value\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/navveenbalani.dev\/index.php\/articles\/artificial-intelligence\/generative-ai\/why-llm-coding-copilots-are-failing-to-deliver-real-value\/#webpage\"},\"author\":{\"@id\":\"https:\/\/navveenbalani.dev\/#\/schema\/person\/51f7ab14b20611d95e3c7fd4ea0950bf\"},\"headline\":\"Why LLM Coding Copilots Are Failing to Deliver Real Value\",\"datePublished\":\"2025-10-17T05:33:25+00:00\",\"dateModified\":\"2025-10-17T05:36:14+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/navveenbalani.dev\/index.php\/articles\/artificial-intelligence\/generative-ai\/why-llm-coding-copilots-are-failing-to-deliver-real-value\/#webpage\"},\"publisher\":{\"@id\":\"https:\/\/navveenbalani.dev\/#\/schema\/person\/51f7ab14b20611d95e3c7fd4ea0950bf\"},\"image\":{\"@id\":\"https:\/\/navveenbalani.dev\/index.php\/articles\/artificial-intelligence\/generative-ai\/why-llm-coding-copilots-are-failing-to-deliver-real-value\/#primaryimage\"},\"articleSection\":\"Agentic AI,Articles,Artificial Intelligence,Featured,Generative AI\",\"inLanguage\":\"en-US\"},{\"@type\":[\"Person\",\"Organization\"],\"@id\":\"https:\/\/navveenbalani.dev\/#\/schema\/person\/51f7ab14b20611d95e3c7fd4ea0950bf\",\"name\":\"Navveen\",\"image\":{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/navveenbalani.dev\/#personlogo\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/navveenbalani.dev\/wp-content\/uploads\/2019\/07\/navveen_balani.jpeg\",\"width\":200,\"height\":200,\"caption\":\"Navveen\"},\"logo\":{\"@id\":\"https:\/\/navveenbalani.dev\/#personlogo\"},\"sameAs\":[\"http:\/\/naveenbalani.com\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","_links":{"self":[{"href":"https:\/\/navveenbalani.dev\/index.php\/wp-json\/wp\/v2\/posts\/4184"}],"collection":[{"href":"https:\/\/navveenbalani.dev\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/navveenbalani.dev\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/navveenbalani.dev\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/navveenbalani.dev\/index.php\/wp-json\/wp\/v2\/comments?post=4184"}],"version-history":[{"count":1,"href":"https:\/\/navveenbalani.dev\/index.php\/wp-json\/wp\/v2\/posts\/4184\/revisions"}],"predecessor-version":[{"id":4185,"href":"https:\/\/navveenbalani.dev\/index.php\/wp-json\/wp\/v2\/posts\/4184\/revisions\/4185"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/navveenbalani.dev\/index.php\/wp-json\/wp\/v2\/media\/3988"}],"wp:attachment":[{"href":"https:\/\/navveenbalani.dev\/index.php\/wp-json\/wp\/v2\/media?parent=4184"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/navveenbalani.dev\/index.php\/wp-json\/wp\/v2\/categories?post=4184"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/navveenbalani.dev\/index.php\/wp-json\/wp\/v2\/tags?post=4184"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}