Writing A Plugin From Scratch

A morph plugin in this repo can own:

  • manifest metadata
  • package dependencies and imports
  • semantic features
  • lowering features
  • route-specific lowering
  • backend emit
  • runtime family imports
  • tooling commands
  • optional plugin library exports
  • tests

Not every package needs every layer, but if you are starting from zero, you need to know what each layer is for and which file owns it.

The central rule stays the same:

  • feature work belongs in package space
  • if package space cannot express the feature, widen the framework
  • do not push feature ownership into Core as a shortcut

What This Lesson Builds

There is no tiny toy plugin in this repo that exercises every layer at once.

So this lesson does something more useful:

  1. it shows the exact package structure you start with
  2. it uses real manifest fields from the current repo
  3. it uses real semantic, lowering, route, emit, and tooling classes from the current repo
  4. it explains which pieces you copy, which pieces you rename, and which pieces you leave out when your package does not need them

Think of this lesson as a full authoring map built from:

  • Core
  • Tensor
  • Test
  • Build
  • Flow
  • Graphics

Step 1: Create The Package Root

Start with a real package root under morphs/.

morphs/MyPackage/
  morph.toml
  features/
    api/
      feature.toml
  blocks/
    my_form/
      block.toml
      Syntax.cpp
  host/
  plugin/

morph.toml must list nested manifests under [include] (for example features/*/feature.toml and, when you own grammar surfaces, blocks/*/block.toml). Without that, your block.toml / feature.toml fragments are invisible to the composed compiler.
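To make the [include] contract concrete, here is a small sketch of single-segment glob matching. This is not the repo's manifest loader; it is a hypothetical illustration of why a pattern like features/*/feature.toml finds features/api/feature.toml but never a deeper path:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical illustration only: split a path on '/' and compare each
// segment, where "*" matches exactly one whole segment.
static std::vector<std::string> splitPath(const std::string &path) {
  std::vector<std::string> parts;
  std::stringstream stream(path);
  std::string part;
  while (std::getline(stream, part, '/')) {
    parts.push_back(part);
  }
  return parts;
}

bool matchesIncludeGlob(const std::string &pattern, const std::string &path) {
  const std::vector<std::string> want = splitPath(pattern);
  const std::vector<std::string> have = splitPath(path);
  if (want.size() != have.size()) {
    return false; // "*" never spans more than one directory level
  }
  for (size_t i = 0; i < want.size(); ++i) {
    if (want[i] != "*" && want[i] != have[i]) {
      return false;
    }
  }
  return true;
}
```

If your real layout nests manifests deeper than one level, one glob per depth is the safe assumption.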

Then add only the directories your package actually owns:

  • add blocks/ when you own [forms.*] syntax (statement/expression shapes)—see morphs/Flow/blocks/ or morphs/Access/blocks/ as minimal examples
  • add runtime/ if you own runtime families or native runtime code
  • add backend/ if you emit backend artifacts such as host LLVM values
  • add more feature folders under features/ as your package grows
  • add CLI or host implementation files under host/ when the package owns semantic or lowering behavior

If you already know the package needs route-specific lowering, runtime imports, and backend emit, a more realistic root looks like this:

morphs/MyPackage/
  morph.toml
  features/
    api/
      feature.toml
    MyFeature/
      MyFeature.cpp
      routes/
        HostLLVM.cpp
        GpuVulkan.cpp
  backend/
    routes/
      host.llvm/
        Emit.cpp
  host/
    MyPackageHostMorphs.cpp
  runtime/
    shared/
      my_package_runtime.c
  plugin/
    MyPackageMorphLibrary.cpp

Do not start in src/.

Do not start in runtime/.

Do not start by patching parser or sema switches in Core.

Start by deciding which package owns the feature, then create the package space for that ownership.


Step 2: Write morph.toml

morph.toml is the package contract. If this file is wrong, the rest of the package is not really in the Morph graph.

Here is a small but real starting pattern assembled from current packages:

[package]
domain = "MyPackage"
owner = "Morph"
abi = 4
library = "bin/morph_morph_my_package"
enabled = true
dependencies = ["Core", "Types"]

[import.hostllvm]
kind = "provider"
id = "provider.core.host_llvm"
from = "Core"

[runtime.family.my_runtime]
description = "My package runtime imports."
variants = ["my_runtime.do_work"]

[runtime.family.my_runtime.symbol.do_work]
llvm_signature = "v:p"
native_symbol = "morph_my_runtime_do_work"

[include]
files = [
  "features/*/feature.toml",
  "blocks/*/block.toml",
]

[build]
nir_sources = [
  "host/MyPackageHostMorphs.cpp",
  "features/MyFeature/MyFeature.cpp",
  "features/MyFeature/routes/HostLLVM.cpp",
  "features/MyFeature/routes/GpuVulkan.cpp"
]
codegen_sources = ["backend/routes/host.llvm/Emit.cpp"]
cli_sources = ["features/tooling/Commands.cpp"]

Every important line here maps to something real in the repo.

The [package] block

This is the baseline identity block used by real packages such as Core, Test, and Build.

  • domain is the package domain id
  • owner identifies who owns the package
  • abi tracks Morph ABI generation
  • library points at the built plugin artifact
  • enabled controls whether the package participates by default
  • dependencies brings other packages into the package graph

For example, Test currently does this:

[package]
domain = "Test"
owner = "Morph"
abi = 4
library = "bin/morph_morph_test"
enabled = true
dependencies = ["Core", "Types", "Flow"]

That dependency list is not decoration. It tells the framework which package surfaces Test expects to compose with.

The [import.*] block

Use imports when your package depends on a provider or exported capability from another package.

Core does this today:

[import.gpu_provider]
kind = "provider"
id = "provider.gpu.vulkan"
from = "GPU"

[import.shader_provider]
kind = "provider"
id = "provider.shader.spirv"
from = "Shader"

This is the right place to express package composition.

Do not hardcode provider ids in unrelated core code and call that "integration."

The [runtime.family.*] block

If your package owns runtime calls, declare them here.

Test is a good real example because it shows both runtime family declaration and LLVM signatures:

[runtime.family.test]
description = "Native test collector runtime imports."
variants = [
  "test.scope_begin",
  "test.step_begin",
  "test.record_failure",
  "test.scope_end",
  "test.total_failures",
]

[runtime.family.test.symbol.scope_begin]
llvm_signature = "v:ppppl"
native_symbol = "morph_test_scope_begin"

[runtime.family.test.symbol.scope_end]
llvm_signature = "l:"
native_symbol = "morph_test_scope_end"

The important point is not the specific symbols. The important point is ownership:

  • package runtime API lives in package manifest
  • package runtime symbol truth lives in package manifest
  • backend/import users consume that package truth

Do not scatter runtime symbol strings through compiler core files.
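Reading the signature strings above, llvm_signature appears to encode the return type before the colon and one letter per parameter after it (v = void, l = a 64-bit integer, p = a pointer). Assuming that encoding, which is inferred from the examples rather than confirmed by the repo, a decoder sketch looks like this:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical decoder for manifest llvm_signature strings.
// Assumed letter meanings (inferred, not confirmed): 'v' = void,
// 'l' = 64-bit integer, 'p' = pointer.
struct DecodedSignature {
  char returnType = 'v';     // one letter before the ':'
  std::vector<char> params;  // one letter per parameter after the ':'
};

bool decodeLlvmSignature(const std::string &signature, DecodedSignature *out) {
  const size_t colon = signature.find(':');
  if (colon != 1u || out == nullptr) {
    return false; // expect exactly one return-type letter, then ':'
  }
  out->returnType = signature[0];
  out->params.assign(signature.begin() + 2, signature.end());
  return true;
}
```

Under that reading, "v:ppppl" matches scope_begin's five arguments above, and "l:" matches scope_end's zero-argument integer return.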

The [build] block

This is where package-owned source files become part of the build graph.

Current packages use this to declare:

  • nir_sources
  • sema_sources
  • cli_sources
  • codegen_sources
  • library_sources

For example, Core currently declares:

[build]
nir_sources = [
  "host/CoreOptimizerHostMorphs.cpp",
  "host/CoreRuntimeBuiltins.cpp",
  "features/Input/Input.cpp",
  "features/Input/chain/Default.cpp",
  "features/Input/chain/Max.cpp",
  "features/Input/chain/Min.cpp"
]
cli_sources = ["features/tooling/Commands.cpp"]
codegen_sources = [
  "features/Input/backend/routes/host.llvm/Emit.cpp",
  "features/Print/backend/routes/host.llvm/Emit.cpp"
]

That is the model to copy:

  • add your package implementation files here
  • do not sneak them into core build files by hand

Step 3: Write features/.../feature.toml

Now declare the actual feature entries.

This is where package-owned method names, operators, routes, providers, and emit routes become visible to the Morph graph.

Example: a chain method

Core declares Input<T>.Min(...) like this:

[core.input.chain.min]
method = "Min"
source = "features/Input/chain/Min.cpp"
component_class = "core.input.chain.min"
param_minValue = ["t"]
returns = "Self"

This means:

  • method = "Min": the source-level chain method name
  • source: the implementation file
  • component_class: the identity that registry/glue will resolve
  • param_minValue: parameter schema
  • returns = "Self": this method continues the receiver shape

Example: a binary feature with route-specific lowering

Tensor declares the semantic operator and then separate route entries:

[tensor.add]
operator = "operator.add"
source = "Tensor/features/Add/Add.cpp"
component_class = "tensor.add.semantic"

[tensor.add.route.hostllvm]
provider_id = "provider.core.host_llvm"
route_id = "host.llvm"
artifact_kind = "host.nir_value"
required_extensions = []
required_runtime_families = []
description = "Lowers tensor.add into Morph-owned host NIR."
source = "Tensor/features/Add/routes/HostLLVM.cpp"
component_class = "tensor.add.route.hostllvm"

[tensor.add.route.gpuvulkan]
provider_id = "provider.gpu.vulkan"
route_id = "gpu.vulkan"
artifact_kind = "host.nir_value"
required_extensions = []
required_runtime_families = []
description = "Lowers tensor.add into Morph-owned gpu.vulkan host NIR."
source = "Tensor/features/Add/routes/GpuVulkan.cpp"
component_class = "tensor.add.route.gpuvulkan"

This is one of the most important Morph patterns:

  • one semantic feature
  • multiple route implementations
  • route selection driven by package metadata

Example: a backend emit route

Tensor also declares backend emit ownership in manifest space:

[tensor.add.route.backendhostllvm]
provider_id = "provider.core.host_llvm"
route_id = "host.llvm"
artifact_kind = "host.llvm_value"
required_extensions = ["llvm.runtime_import.v1"]
required_runtime_families = ["tensor"]
description = "Emits host LLVM IR for tensor.add."
source = "backend/routes/host.llvm/Emit.cpp"
component_class = "tensor.add.backend.hostllvm"

Again, this matters because emit is package-owned.

Do not add one-off backend branches in compiler core just because a package needs a backend path.


Step 4: Implement The Semantic Feature

Once the manifest entry exists, implement the feature class.

Here is the real Input<T>.Min(...) semantic implementation from the repo:

#include "../InputFeaturePlugin.h"

namespace morph::nir::morph {
namespace {

using morph::morph::ChainSemanticFeature;
using morph::morph::ChainSignatureView;
using morph::morph::SemanticFeatureContext;
using morph::morph::FeatureSegmentView;
using core_input::InputFeatureSemanticFact;

class InputMinChainSemantic final : public ChainSemanticFeature {
public:
  bool apply(SemanticFeatureContext &ctx,
             const morph::morph::FeatureInvocationView &,
             const FeatureSegmentView &segment,
             const ChainSignatureView &signature,
             morph::morph::FeatureSemanticFact &fact,
             std::string *outError) override {
    InputFeatureSemanticFact *inputFact =
        core_input::asInputFeatureSemanticFact(fact);
    if (inputFact == nullptr || inputFact->inputType == nullptr) {
      if (outError != nullptr) {
        *outError =
            "Input<T>.Min() received an incompatible featureInvocation fact.";
      }
      return false;
    }
    if (!inputFact->inputType->isNumeric()) {
      if (outError != nullptr) {
        *outError = "Input<T>.Min() is only valid for numeric T.";
      }
      return false;
    }
    if (!core_input::expectChainArity(segment, signature, outError)) {
      return false;
    }

    ASTNode *argNode = core_input::singleArgument(segment);
    NTypePtr argType = ctx.infer(argNode);
    if (argType != nullptr && !argType->isError() && !argType->isUnknown() &&
        !argType->isDynamic() && !argType->isNumeric()) {
      if (outError != nullptr) {
        *outError = "Input<T>.Min() argument must be numeric.";
      }
      return false;
    }

    inputFact->minArg = argNode;
    return true;
  }
};

} // namespace

MORPHLANG_MORPH_DEFINE_CHAIN_SEMANTIC_FACTORY(core_input_chain_min,
                                              InputMinChainSemantic)
} // namespace morph::nir::morph

This is the exact shape you should internalize:

  1. derive from the correct feature base class
  2. validate that the incoming fact is your package's fact type
  3. validate receiver state
  4. validate arity and parameter types
  5. record accepted semantic information back into the fact
  6. export a factory macro

The last line is critical:

MORPHLANG_MORPH_DEFINE_CHAIN_SEMANTIC_FACTORY(core_input_chain_min,
                                              InputMinChainSemantic)

That exported factory is what generated glue and runtime registry logic will eventually resolve.

If you forget the factory export, the manifest entry exists but nothing concrete can be instantiated.
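If the macro mechanics feel opaque, this is the usual shape such a DEFINE macro expands to: a factory function with a predictable name that registry code can resolve and call. Everything below is an illustrative assumption, not the real MORPHLANG macro:

```cpp
#include <cassert>
#include <memory>
#include <string>

// Minimal stand-in for a feature base class.
class ChainSemanticFeatureSketch {
public:
  virtual ~ChainSemanticFeatureSketch() = default;
  virtual std::string name() const = 0;
};

// Hypothetical expansion pattern: the macro stamps out a factory
// function whose symbol name embeds the manifest-visible id, so a
// registry can map "core_input_chain_min" to this function.
#define SKETCH_DEFINE_CHAIN_SEMANTIC_FACTORY(id, ClassName)           \
  std::unique_ptr<ChainSemanticFeatureSketch> sketch_factory_##id() { \
    return std::make_unique<ClassName>();                             \
  }

class InputMinSketch final : public ChainSemanticFeatureSketch {
public:
  std::string name() const override { return "core.input.chain.min"; }
};

SKETCH_DEFINE_CHAIN_SEMANTIC_FACTORY(core_input_chain_min, InputMinSketch)
```

Omit the last line and sketch_factory_core_input_chain_min never exists, which is exactly the "manifest entry with nothing concrete behind it" failure described above.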


Step 5: Implement Declaration Or Statement Lowering If Your Feature Owns It

Some packages own declarations or statements, not just expressions or chain methods.

Test is a good real example because it lowers test {} blocks into a generated harness.

The implementation below is a real current lowering class:

class TestDeclarationLowering final : public HostDeclarationLowering {
public:
  bool matches(const ASTNode &node) const override {
    return test_morph::matchesTestExtension(node);
  }

  bool lower(NIRBuilder &builder, ASTNode *node,
             std::string *outError) override {
    auto *payload = test_morph::asTestBlockPayload(node);
    if (payload == nullptr) {
      return true;
    }
    if (builder.compilationMode() != morph::CompilationMode::Test) {
      return true;
    }
    if (payload->compileExpectation) {
      return true;
    }

    HarnessState &harness = ensureHarness(builder);
    const int caseIndex = harness.nextCaseIndex++;
    if (!matchesRequestedFilters(*payload, caseIndex)) {
      return true;
    }
    const std::string caseName = "__MorphTestCase_" + std::to_string(caseIndex);
    const std::string displayName = testNameOrDefault(*payload, caseIndex);

    builder.beginSyntheticFunction(caseName, NType::makeInt());
    Instruction *scopeBegin =
        builder.createInst(InstKind::Call, NType::makeVoid(), "");
    scopeBegin->addOperand(makeStringLiteral(kScopeBeginSymbol));
    scopeBegin->addOperand(makeStringLiteral(displayName));
    scopeBegin->addOperand(makeStringLiteral(payload->tag));
    scopeBegin->addOperand(makeStringLiteral(payload->expectedId));
    scopeBegin->addOperand(makeStringLiteral(node->location.file));
    scopeBegin->addOperand(makeIntegerLiteral(node->location.line));

    if (payload->body != nullptr && payload->body->type == ASTNodeType::Block) {
      auto *body = static_cast<BlockNode *>(payload->body.get());
      int stepIndex = 0;
      for (const auto &statement : body->statements) {
        lowerManagedStep(builder, statement.get(), stepIndex++);
      }
    }

    Instruction *scopeEnd =
        builder.createInst(InstKind::Call, NType::makeInt(), "");
    scopeEnd->addOperand(makeStringLiteral(kScopeEndSymbol));
    Instruction *ret =
        builder.createInst(InstKind::Ret, NType::makeVoid(), "");
    ret->addOperand(scopeEnd);
    builder.endSyntheticFunction();

    builder.setCurrentFunction(harness.harness);
    builder.setCurrentBlock(harness.tailBlock);
    Instruction *callCase =
        builder.createInst(InstKind::Call, NType::makeInt(), "");
    callCase->addOperand(makeStringLiteral(caseName));
    Block *nextBlock = harness.harness->createBlock(
        builder.nextGeneratedBlockName() + "_test_harness");
    Instruction *advance =
        builder.createInst(InstKind::Br, NType::makeVoid(), "");
    advance->addOperand(new BlockRef(nextBlock));
    harness.tailBlock = nextBlock;
    builder.setCurrentBlock(nextBlock);
    return true;
  }
};

MORPHLANG_MORPH_DEFINE_DECLARATION_LOWERING_FACTORY(test_declaration_lowering,
                                                    TestDeclarationLowering)

What this should teach you:

  • declaration-level ownership also lives in package space
  • package lowering can create synthetic functions and blocks
  • package lowering can call package runtime families
  • package lowering can be gated by compilation mode
  • the registry export is still package-local

The macro here comes from the host lowering registry surface:

MORPHLANG_MORPH_DEFINE_DECLARATION_LOWERING_FACTORY(test_declaration_lowering,
                                                    TestDeclarationLowering)

If your package owns statements, declarations, or other non-expression lowering, this is the family of API you need to use.


Step 6: Add Separate Lowering Routes For Different Providers

One semantic feature does not mean one lowering implementation.

If the same feature must lower differently for host and GPU, declare separate routes and implement separate classes.

This is the real host route for tensor.add:

#include "../../TensorFeaturePlugin.h"

namespace morph::nir::morph {
namespace {

using morph::morph::HostRouteFeature;
using morph::morph::LoweringContext;

class TensorAddHostRoute final : public HostRouteFeature {
public:
  bool lower(LoweringContext &ctx, const morph::morph::FeatureFact &fact,
             nir::Value **outValue, std::string *outError) override {
    return tensor_surface::lowerTensorBinaryFeatureFact(ctx, fact, false,
                                                        outValue, outError);
  }
};

} // namespace

MORPHLANG_MORPH_DEFINE_HOST_ROUTE_FACTORY(tensor_add_route_hostllvm,
                                          TensorAddHostRoute)

} // namespace morph::nir::morph

And this is the real GPU Vulkan route:

#include "../../TensorFeaturePlugin.h"

namespace morph::nir::morph {
namespace {

using morph::morph::HostRouteFeature;
using morph::morph::LoweringContext;

class TensorAddGpuVulkanRoute final : public HostRouteFeature {
public:
  bool lower(LoweringContext &ctx, const morph::morph::FeatureFact &fact,
             nir::Value **outValue, std::string *outError) override {
    return tensor_surface::lowerTensorBinaryFeatureFact(ctx, fact, true,
                                                        outValue, outError);
  }
};

} // namespace

MORPHLANG_MORPH_DEFINE_HOST_ROUTE_FACTORY(tensor_add_route_gpuvulkan,
                                          TensorAddGpuVulkanRoute)

} // namespace morph::nir::morph

The two files are almost identical, and that is fine.

The point is not to force everything into one giant if (gpu) ... else ... branch.

The point is to let the package own multiple route implementations cleanly.

That is how "platform-specific lowering" or "provider-specific lowering" should look in Morph:

  • distinct manifest entries
  • distinct route classes
  • distinct provider ids and route ids
  • shared helper code only where it actually helps

Do not push provider routing logic into generic core lowering tables.
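The manifest-driven alternative to one giant branch can be pictured as a table keyed by (provider_id, route_id), where each manifest route entry contributes one binding. This is a hypothetical sketch of the selection model, not the repo's actual registry:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <utility>

// Hypothetical route table: each manifest route entry contributes one
// (provider_id, route_id) -> lowering callback binding.
using RouteKey = std::pair<std::string, std::string>;
using RouteLowering = std::function<std::string()>;

class RouteTableSketch {
public:
  void add(const std::string &providerId, const std::string &routeId,
           RouteLowering lowering) {
    routes_[{providerId, routeId}] = std::move(lowering);
  }

  // Selection is driven by metadata, not by branches inside one lowering.
  const RouteLowering *find(const std::string &providerId,
                            const std::string &routeId) const {
    auto it = routes_.find({providerId, routeId});
    return it == routes_.end() ? nullptr : &it->second;
  }

private:
  std::map<RouteKey, RouteLowering> routes_;
};
```

Under this model, adding a new provider means adding a manifest entry and a route class, never editing the dispatch itself.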


Step 7: Add Backend Emit If The Package Owns Backend Output

If your package stops at NIR, you may not need a backend emit file.

If your package owns backend output, it must own its emit logic too.

The Graphics package shows a real host LLVM backend emit file. The code below is a real excerpt from backend/routes/host.llvm/Emit.cpp:

bool emitStandardRuntimeRoute(MorphMorphBackendRouteContext *context,
                              const MorphMorphBackendOperationView *op,
                              std::string_view runtimeSymbol,
                              std::string *outError) {
  BackendRouteApi api(*context);
  std::vector<MorphMorphBackendValueHandle> arguments;
  if (!materializeAllOperands(api, op, &arguments, outError)) {
    return false;
  }
  MorphMorphBackendValueHandle result{};
  const bool expectsResult = hasBindableResult(op);
  if (!api.callRuntime(graphics_ids::kRuntimeFamily, runtimeSymbol, arguments,
                       expectsResult ? std::string_view(op->result_name)
                                     : std::string_view{},
                       expectsResult ? &result : nullptr, outError)) {
    return false;
  }
  return !expectsResult || bindIfNamed(api, op, result, outError);
}

bool emitMaterialCreateRoute(MorphMorphBackendRouteContext *context,
                             const MorphMorphBackendOperationView *op,
                             std::string *outError) {
  if (op == nullptr || op->operand_count < 1u) {
    if (outError != nullptr) {
      *outError = "graphics.material.create requires a shader operand";
    }
    return false;
  }

  BackendRouteApi api(*context);
  MorphMorphBackendValueHandle descriptor{};
  if (!api.namedArtifactDescriptor("graphics", shaderName, &descriptor,
                                   outError)) {
    return false;
  }
  ...
}

This is what backend emit means in Morph:

  • materialize operands from backend operation views
  • call runtime family symbols through package-owned metadata
  • resolve named artifacts through package/provider descriptors
  • bind emitted values back to named backend results

Do not add ad-hoc backend symbol handling in generic LLVM core code just because one package needs a runtime call.

If a package owns the operation family, it should also own the backend route that emits it.
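The emit flow above (materialize operands, call a runtime family symbol, bind a named result) can be reduced to a generic three-step pipeline. Every type and call below is a simplified stand-in assumed for illustration, not the real backend API:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Simplified stand-ins for backend value handles and the emit API.
struct ValueHandleSketch {
  long value = 0;
};

class EmitApiSketch {
public:
  // Step 1: materialize each operand of the operation view.
  std::vector<ValueHandleSketch>
  materializeOperands(const std::vector<long> &operands) {
    std::vector<ValueHandleSketch> handles;
    for (long operand : operands) {
      handles.push_back({operand});
    }
    return handles;
  }

  // Step 2: call "<family>.<symbol>" -- faked here as summing operands,
  // purely so the pipeline has an observable result.
  ValueHandleSketch callRuntime(const std::string &family,
                                const std::string &symbol,
                                const std::vector<ValueHandleSketch> &args) {
    long total = 0;
    for (const ValueHandleSketch &arg : args) {
      total += arg.value;
    }
    lastCall = family + "." + symbol;
    return {total};
  }

  // Step 3: bind the emitted value back to the named backend result.
  void bindResult(const std::string &name, ValueHandleSketch handle) {
    results[name] = handle.value;
  }

  std::string lastCall;
  std::map<std::string, long> results;
};
```

The point of the shape is that the runtime family and symbol names flow in from package metadata; the emit code never hardcodes backend symbol strings of its own.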


Step 8: Add Tooling Commands If The Package Owns CLI Behavior

Packages can also own commands.

If your package needs its own morph subcommand, write a tooling command feature instead of wiring a new command directly into core dispatch.

Build is a clean real example:

#include "morphc/morph/Build.h"
#include "morphc/morph/Tooling.h"

namespace morph::morph {
namespace {

class BuildCommand final : public ToolingCommandFeature {
public:
  ToolingCommandDescriptor describe() const override {
    return {"build", {"build"}, "Build the current project",
            "morph build [--target <platform>]", "build"};
  }

  int run(const ToolingCommandContext &context,
          const ToolingCommandRequest &request,
          std::string *outError) override {
    return runStageWithResolvedTarget(context, request,
                                      BuildPipelineStage::Build, outError);
  }
};

} // namespace

MORPHLANG_MORPH_DEFINE_TOOLING_COMMAND_FACTORY(build_tooling_command,
                                               BuildCommand)

} // namespace morph::morph

That one file teaches the whole pattern:

  1. derive from ToolingCommandFeature
  2. return a descriptor from describe()
  3. implement behavior in run(...)
  4. export the factory with MORPHLANG_MORPH_DEFINE_TOOLING_COMMAND_FACTORY(...)

If your new plugin owns a command, this is the path.

Do not patch core command tables just to add one command name.
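The package-owned command model amounts to this: plugins contribute describe()/run() pairs, and generic dispatch only looks names up. A hypothetical dispatcher sketch, not the repo's tooling surface:

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

// Simplified stand-in for ToolingCommandFeature.
class CommandSketch {
public:
  virtual ~CommandSketch() = default;
  virtual std::string name() const = 0;
  virtual int run() = 0;
};

// Generic dispatch owns no command names of its own; packages register.
class CommandDispatchSketch {
public:
  void add(std::unique_ptr<CommandSketch> command) {
    const std::string key = command->name();
    commands_[key] = std::move(command);
  }

  int dispatch(const std::string &name) {
    auto it = commands_.find(name);
    return it == commands_.end() ? 127 : it->second->run();
  }

private:
  std::map<std::string, std::unique_ptr<CommandSketch>> commands_;
};

class BuildCommandSketch final : public CommandSketch {
public:
  std::string name() const override { return "build"; }
  int run() override { return 0; } // pretend the build succeeded
};
```

With this shape, registering a new command is a package-local change; core dispatch code never grows a new case.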


Step 9: Add A Plugin Library Entry If Your Package Uses One

Some packages in this repo still export a Morph library entry under plugin/.

If your package uses this path, follow the real pattern used by Flow or Test.

For example, Flow exports:

#include "morphc/morph/MorphABI.h"

namespace {

const char *const kFlowLoweringCaps[] = {"domain.flow.lowering.statement"};
const char *const kFlowSemaCaps[] = {"domain.flow.semantic"};

const MorphMorphFeature kComponents[] = {
    {{MORPHLANG_MORPH_ABI_VERSION_CURRENT, "nir.flow.statement_lowering",
      "Flow", MORPHLANG_MORPH_PHASE_LOWERING,
      MORPHLANG_MORPH_ARTIFACT_PROGRAM_AST_SEMANTIC,
      MORPHLANG_MORPH_ARTIFACT_NIR, kFlowLoweringCaps, 1u,
      "Control-flow statement lowering into generic NIR blocks.",
      MORPHLANG_MORPH_FEATURE_ROLE_LOWERING, nullptr, nullptr,
      MORPHLANG_MORPH_BACKEND_HOST_UNSPECIFIED,
      MORPHLANG_MORPH_BACKEND_ARTIFACT_UNSPECIFIED, nullptr, 0u},
     {nullptr, nullptr, nullptr, nullptr, nullptr},
     nullptr},
    {{MORPHLANG_MORPH_ABI_VERSION_CURRENT, "sema.flow.extension", "Flow",
      MORPHLANG_MORPH_PHASE_SEMANTIC, MORPHLANG_MORPH_ARTIFACT_UNSPECIFIED,
      MORPHLANG_MORPH_ARTIFACT_UNSPECIFIED, kFlowSemaCaps, 1u,
      "Control-flow semantic extension runtime.",
      MORPHLANG_MORPH_FEATURE_ROLE_SEMA_EXTENSION, nullptr, nullptr,
      MORPHLANG_MORPH_BACKEND_HOST_UNSPECIFIED,
      MORPHLANG_MORPH_BACKEND_ARTIFACT_UNSPECIFIED, nullptr, 0u},
     {nullptr, nullptr, nullptr, nullptr, nullptr},
     nullptr},
};

const MorphMorphLibrary kLibrary = {
    MORPHLANG_MORPH_ABI_VERSION_CURRENT,
    "morph.morph.flow",
    "Flow",
    "Morph flow morph library",
    sizeof(kComponents) / sizeof(kComponents[0]),
    kComponents,
};

} // namespace

MORPHLANG_MORPH_EXPORT const MorphMorphLibrary *morph_morph_get_library(void) {
  return &kLibrary;
}

The important rule is still the same:

  • if package metadata lives here, keep it in package space
  • do not mirror it in core tables

Step 10: Put The Right Source Files In The Right Registry Surface

At this point, many new authors make the same mistake:

  • they wrote the C++ file
  • they wrote the manifest entry
  • but they did not connect the file to the right build/registry surface

Use this checklist.

If you wrote a semantic feature

  • add the feature entry in features/.../feature.toml
  • add the C++ source path under the package [build] section if needed by the package build model
  • export the correct factory macro

If you wrote a declaration or expression lowering

  • implement the class under host/ or the package feature tree
  • export the host lowering factory macro
  • make sure the source file is listed under nir_sources

If you wrote a tooling command

  • export MORPHLANG_MORPH_DEFINE_TOOLING_COMMAND_FACTORY(...)
  • put the file in cli_sources

If you wrote backend emit

  • declare the backend route entry in feature.toml
  • add the file under codegen_sources

If you wrote runtime families

  • declare runtime family variants and signatures in morph.toml
  • add runtime code under package runtime ownership
  • do not hardcode runtime symbols elsewhere
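On the native side, the manifest's native_symbol names an exported C symbol whose shape must match the declared llvm_signature. Assuming the earlier reading of "v:p" (void return, one pointer parameter), a hypothetical stub for the my_runtime.do_work example might look like this; the payload shape is invented purely for illustration:

```cpp
#include <cassert>

// Hypothetical runtime stub. The function name must match the manifest's
// native_symbol exactly, and extern "C" keeps it unmangled so the
// backend's runtime import machinery can resolve it by string.
extern "C" void morph_my_runtime_do_work(void *payload) {
  if (payload == nullptr) {
    return; // defensive: tolerate a null payload
  }
  // Assumed payload shape for illustration only: a counter to bump.
  long *counter = static_cast<long *>(payload);
  *counter += 1;
}
```

If the C shape and the manifest signature ever disagree, the mismatch surfaces at link or call time, not in the manifest, which is why the manifest is kept as the single source of symbol truth.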

Step 11: Test The Package Like A Real Repo Change

Do not stop at "the file compiles in my head."

Add tests where the ownership actually matters:

  • parser tests if syntax shape changes
  • semantic tests if typing or validation changes
  • NIR tests if lowering changes
  • runtime tests if runtime family behavior changes
  • CLI tests if tooling command behavior changes

Then run the repo's real driver:

# from repository root (Windows)
morphc.bat build --test "Nir"
# or filtered unit pass:
morphc.bat test "Semantic"

See the documentation in scripts/README.md for the full command matrix. Legacy scripts/build_tests.ps1 may still exist but morphc is the supported entrypoint.

If your plugin touches docs only, sync the docs tree too:

node website/scripts/sync-docs.mjs

The package is not done until:

  • manifest and implementation agree
  • the correct factory is exported
  • the correct build source list includes the file
  • tests prove the package owns the behavior

Common Mistakes

These are the mistakes that push people back into core-first habits.

Mistake 1: editing Core because it feels faster

If the feature belongs to a package, package space is the first move.

Only widen the framework when the package cannot express the feature with current SDK hooks.

Mistake 2: writing implementation without manifest ownership

If there is no morph.toml or feature.toml declaration, the package is not really declaring ownership.

Mistake 3: putting provider routing in one generic branch

If host, GPU, shader, or provider routes differ, give them separate route entries and separate route classes.

Mistake 4: hardcoding runtime symbols

Runtime family symbol truth belongs in the package manifest.

Mistake 5: forgetting the factory macro

If there is no exported factory, registry discovery has nothing concrete to instantiate.

Mistake 6: adding a new command directly to core

Commands are package-owned tooling features.

Mistake 7: treating backend emit as compiler-core infrastructure

Backend emit for a package-owned semantic family should stay with the package.


Final Checklist

Before you say "the plugin is done", verify this list.

  • morphs/MyPackage/morph.toml exists and declares package identity
  • dependencies and imports are declared in the manifest
  • runtime families and LLVM signatures are declared if the package owns runtime imports
  • features/.../feature.toml declares the feature, route, or backend entries
  • semantic/lowering/tooling/backend classes exist in package space
  • the correct factory macros are exported
  • [build] lists the right sources
  • tests cover the package-owned behavior
  • no feature ownership was pushed into Core as a shortcut

If you can satisfy that list, you are actually writing a Morph plugin instead of a hidden core patch.


Next Steps