
Conversation

@thc1006 thc1006 commented Dec 15, 2025

Summary

Add support for pulling models in CNCF ModelPack format by converting them to Docker model-spec format on-the-fly.

  • Add modelpack package with type definitions and conversion logic
  • Convert config fields (paramSize -> parameters, createdAt -> created)
  • Convert layer media types (cncf.model.weight -> docker.ai.gguf)
  • Preserve extended metadata in Config.ModelPack extension field

Related Issues

Test Plan

  • Unit tests for config conversion
  • Unit tests for layer media type conversion
  • Unit tests for artifact wrapper

Copilot AI review requested due to automatic review settings December 15, 2025 18:30
@gemini-code-assist (Contributor)

Summary of Changes

Hello @thc1006, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the model distribution capabilities by introducing native compatibility with the CNCF ModelPack format. It enables the system to seamlessly pull and process models packaged under this standard by performing an automatic, on-the-fly conversion to the internal Docker model-spec format. This ensures that users can leverage a wider range of models without manual intervention, while also preserving rich metadata from the original ModelPack specification.

Highlights

  • CNCF ModelPack Compatibility: Adds support for pulling models packaged in the CNCF ModelPack format, expanding the range of compatible models.
  • On-the-fly Conversion: ModelPack models are transparently converted to the Docker model-spec format during the pull operation, ensuring seamless integration.
  • Config Field Mapping: Automatically maps ModelPack configuration fields, such as 'paramSize' to 'parameters' and 'createdAt' to 'created', to align with the Docker specification.
  • Layer Media Type Conversion: Converts ModelPack specific layer media types (e.g., 'cncf.model.weight') to their corresponding Docker equivalents (e.g., 'docker.ai.gguf' or 'docker.ai.safetensors').
  • Extended Metadata Preservation: Any additional metadata from the ModelPack specification that doesn't have a direct Docker equivalent is preserved in a new 'Config.ModelPack' extension field.

Copilot AI left a comment

Pull request overview

This PR adds support for the CNCF ModelPack format by enabling transparent conversion to Docker's model-spec format during model pulls. The implementation introduces a new modelpack package that handles type definitions, configuration conversion, and artifact wrapping to seamlessly integrate ModelPack models into the existing Docker model infrastructure.

Key changes:

  • New modelpack package with complete type definitions matching the CNCF ModelPack specification
  • On-the-fly conversion logic that maps ModelPack config fields to Docker format equivalents
  • Artifact wrapper that presents ModelPack models as Docker format models while preserving extended metadata

Reviewed changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated 2 comments.

Summary per file:

  • pkg/distribution/types/config.go: Adds ModelPack extension field to store metadata without direct Docker format mappings
  • pkg/distribution/modelpack/types.go: Defines CNCF ModelPack format types including Model, ModelDescriptor, ModelConfig, and ModelCapabilities
  • pkg/distribution/modelpack/convert.go: Implements conversion logic from ModelPack to Docker format, including field mapping and metadata extraction
  • pkg/distribution/modelpack/convert_test.go: Comprehensive unit tests for config conversion, format mapping, and metadata extraction
  • pkg/distribution/modelpack/artifact.go: Wrapper that converts ModelPack artifacts to Docker format on-the-fly, handling config and layer media type conversion
  • pkg/distribution/modelpack/artifact_test.go: Unit tests for artifact wrapper covering conversion, manifest updates, and layer handling
  • pkg/distribution/internal/progress/reporter.go: Adds WriteInfo function for informational progress messages
  • pkg/distribution/distribution/client.go: Integrates ModelPack detection and conversion into the pull workflow with digest updates



// ModelDescriptor defines the general information of a model.
type ModelDescriptor struct {
// CreatedAt is the date and time on which the model was built.
Copilot AI Dec 15, 2025

The JSON tag uses camelCase "createdAt" which is inconsistent with typical Go JSON naming conventions. While this matches the CNCF ModelPack spec, consider documenting this explicitly in a comment if it's an intentional deviation from the project's typical snake_case JSON naming pattern seen in the Docker format (e.g., "context_size").

Suggested change
- // CreatedAt is the date and time on which the model was built.
+ // CreatedAt is the date and time on which the model was built.
+ // Note: The JSON tag uses camelCase ("createdAt") to match the CNCF ModelPack spec,
+ // which intentionally deviates from the typical Go snake_case convention (e.g., "created_at").

Comment on lines +228 to +230
if convertedDigest != remoteDigest {
c.log.Infof("Model converted from ModelPack format, new digest: %s", convertedDigest.String())
}
Copilot AI Dec 15, 2025

The digest comparison check occurs after conversion, but the log message is only written when digests differ. Since ModelPack conversion always produces a different digest (due to config changes), consider removing this conditional and always logging the conversion, or restructure to log before conversion is attempted.

Suggested change
- if convertedDigest != remoteDigest {
- 	c.log.Infof("Model converted from ModelPack format, new digest: %s", convertedDigest.String())
- }
+ c.log.Infof("Model converted from ModelPack format, new digest: %s", convertedDigest.String())

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces compatibility with the CNCF ModelPack format by converting models to the Docker model-spec format on-the-fly. The changes are well-encapsulated within a new modelpack package and cleanly integrated into the distribution client. The implementation is robust, covering on-the-fly conversion of both configuration and layer media types, and is supported by a comprehensive suite of unit tests. My feedback includes a couple of suggestions to ensure true deep-copying of data structures, which will improve the overall robustness of the conversion logic.

Comment on lines 3 to 16
import (
"bytes"
"encoding/json"
"fmt"
"maps"
"strings"

v1 "github.com/docker/model-runner/pkg/go-containerregistry/pkg/v1"
"github.com/docker/model-runner/pkg/go-containerregistry/pkg/v1/partial"
ggcr "github.com/docker/model-runner/pkg/go-containerregistry/pkg/v1/types"

mdpartial "github.com/docker/model-runner/pkg/distribution/internal/partial"
"github.com/docker/model-runner/pkg/distribution/types"
)
Severity: medium

To support the suggested deep copy of slice fields in the Manifest method, the slices package needs to be imported. This change adds it to the import block, following standard Go import grouping.

Suggested change (add "slices" to the import block):

import (
	"bytes"
	"encoding/json"
	"fmt"
	"maps"
	"slices"
	"strings"

	v1 "github.com/docker/model-runner/pkg/go-containerregistry/pkg/v1"
	"github.com/docker/model-runner/pkg/go-containerregistry/pkg/v1/partial"
	ggcr "github.com/docker/model-runner/pkg/go-containerregistry/pkg/v1/types"

	mdpartial "github.com/docker/model-runner/pkg/distribution/internal/partial"
	"github.com/docker/model-runner/pkg/distribution/types"
)

Comment on lines 133 to 143
for i, layer := range srcManifest.Layers {
layers[i] = v1.Descriptor{
MediaType: convertLayerMediaType(layer.MediaType, a.convertedConfig.Config.Format),
Size: layer.Size,
Digest: layer.Digest,
URLs: layer.URLs,
Annotations: maps.Clone(layer.Annotations),
Data: layer.Data,
Platform: layer.Platform,
}
}
Severity: medium

The comment on line 131 states "Deep copy layers", but the implementation performs a shallow copy for slice fields (URLs, Data) and the pointer field (Platform). This could lead to unexpected side effects if the source artifact is modified elsewhere. To ensure the converted artifact is fully independent and to align with the comment's intent, a true deep copy of the layer descriptors should be performed.

	for i, layer := range srcManifest.Layers {
		layers[i] = v1.Descriptor{
			MediaType:   convertLayerMediaType(layer.MediaType, a.convertedConfig.Config.Format),
			Size:        layer.Size,
			Digest:      layer.Digest,
			URLs:        slices.Clone(layer.URLs),
			Annotations: maps.Clone(layer.Annotations),
			Data:        slices.Clone(layer.Data),
		}
		if layer.Platform != nil {
			p := *layer.Platform
			p.OSFeatures = slices.Clone(layer.Platform.OSFeatures)
			layers[i].Platform = &p
		}
	}

Add support for pulling models in CNCF ModelPack format by converting
them to Docker model-spec format on-the-fly during pull operations.

- Add modelpack package with type definitions and conversion logic
- Convert config fields (paramSize -> parameters, createdAt -> created)
- Convert layer media types (cncf.model.weight -> docker.ai.gguf)
- Preserve extended metadata in Config.ModelPack extension field
- Add comprehensive test coverage

Closes: docker/model-spec#3
Signed-off-by: thc1006 <[email protected]>
@thc1006 thc1006 force-pushed the feat/modelpack-compatibility branch from bb92e7a to a804ffc Compare December 15, 2025 18:34
@sourcery-ai sourcery-ai bot left a comment

Hey there - I've reviewed your changes - here's some feedback:

  • In convertDiffIDs, invalid digests are silently skipped; consider at least logging or surfacing which entries were dropped so that unexpected or malformed ModelFS data is detectable rather than failing quietly.
  • ConfigFile on convertedArtifact now returns a non-nil error, which is a behavior change vs. many OCI-style artifact types; if any existing code calls ConfigFile() on types.ModelArtifact, it may now error unexpectedly, so it could be safer either to implement a minimal v1.ConfigFile or document/guard against this usage at the call sites.
## Individual Comments

### Comment 1
<location> `pkg/distribution/modelpack/artifact_test.go:161-170` </location>
<code_context>
+func TestNewConvertedArtifact(t *testing.T) {
</code_context>

<issue_to_address>
**suggestion (testing):** Add coverage for NewConvertedArtifact error path when source.RawConfigFile fails

Currently we only exercise the `invalid source config` error case. Please also cover the case where `source.RawConfigFile()` itself returns an error (e.g., registry/network/IO failure) by using a stub `types.ModelArtifact` whose `RawConfigFile` returns an error, and assert that `NewConvertedArtifact` propagates it with the expected wrapping.

Suggested implementation:

```golang
func TestNewConvertedArtifact(t *testing.T) {
	t.Run("successful conversion", func(t *testing.T) {
		created := time.Date(2025, 1, 15, 10, 30, 0, 0, time.UTC)
		mpConfig := Model{
			Descriptor: ModelDescriptor{
				CreatedAt: &created,
				Family:    "llama",
				Vendor:    "TestVendor",
			},
			Config: ModelConfig{
				Architecture: "transformer",
			},
		}

		// ... existing successful conversion test setup and assertions ...
	})

	t.Run("raw config file error is propagated", func(t *testing.T) {
		// Arrange: use a ModelArtifact whose RawConfigFile returns an error
		source := &rawConfigErrorArtifact{}

		// Act
		_, err := NewConvertedArtifact(context.Background(), source)

		// Assert
		if err == nil {
			t.Fatalf("expected error from NewConvertedArtifact, got nil")
		}

		// We expect NewConvertedArtifact to wrap the underlying RawConfigFile error.
		// Use errors.Is if a sentinel error is available; otherwise fall back to
		// checking for a stable substring in the error message.
		var rawCfgErr *rawConfigError
		if !errors.As(err, &rawCfgErr) && !strings.Contains(err.Error(), "raw config") {
			t.Fatalf("expected error wrapping RawConfigFile failure, got: %v", err)
		}
	})
}
```

1. Define a stub artifact type that implements `types.ModelArtifact` (or the local `ModelArtifact` interface used by `NewConvertedArtifact`) and whose `RawConfigFile` method returns a wrapped error type so we can assert error propagation. For example, near the other test doubles (e.g., `mockLayer`), add:
   ```go
   type rawConfigError struct {
   	err error
   }

   func (e *rawConfigError) Error() string {
   	return fmt.Sprintf("raw config error: %v", e.err)
   }

   type rawConfigErrorArtifact struct {
   	// Add fields or embed an existing mock artifact if needed to satisfy
   	// the full ModelArtifact interface.
   }

   func (a *rawConfigErrorArtifact) RawConfigFile() (io.ReadCloser, error) {
   	return nil, &rawConfigError{err: errors.New("simulated RawConfigFile failure")}
   }

   // Implement the remaining methods required by the ModelArtifact interface.
   // For methods not relevant to this test, you can return zero values and nil errors.
   ```
   If there is already a mock artifact used in other tests (e.g., `mockArtifact`), prefer embedding it:
   ```go
   type rawConfigErrorArtifact struct {
   	*mockArtifact
   }

   func (a *rawConfigErrorArtifact) RawConfigFile() (io.ReadCloser, error) {
   	return nil, &rawConfigError{err: errors.New("simulated RawConfigFile failure")}
   }
   ```
2. Ensure `NewConvertedArtifact` is invoked with the correct signature in the new subtest. The example above assumes `NewConvertedArtifact(context.Context, types.ModelArtifact)`, but if your function requires additional parameters (e.g. registry client, options), mirror the call from the "successful conversion" subtest and only swap the `source` argument to use `&rawConfigErrorArtifact{...}`.
3. Import any additional packages used in the new test and helper types:
   ```go
   import (
       "context"
       "errors"
       "fmt"
       "io"
       "strings"
       // existing imports...
   )
   ```
4. If `NewConvertedArtifact` wraps the `RawConfigFile` error with a specific message or sentinel error (for example, `fmt.Errorf("reading raw config: %w", err)`), tighten the assertion accordingly:
   - If there is a sentinel error, replace the `errors.As` / `strings.Contains` logic with `errors.Is(err, <sentinelErr>)`.
   - If the message prefix is stable, assert on that exact prefix instead of the generic `"raw config"` substring.
</issue_to_address>

### Comment 2
<location> `pkg/distribution/modelpack/artifact_test.go:318-31` </location>
<code_context>
+	}
+}
+
+func TestConvertedArtifact_LayerMediaTypeConversion(t *testing.T) {
+	tests := []struct {
+		name          string
+		format        string
+		expectedMedia ggcr.MediaType
+	}{
+		{
+			name:          "GGUF format converts to Docker GGUF",
+			format:        "gguf",
+			expectedMedia: types.MediaTypeGGUF,
+		},
+		{
+			name:          "Safetensors format converts to Docker Safetensors",
+			format:        "safetensors",
+			expectedMedia: types.MediaTypeSafetensors,
+		},
+	}
+
+	for _, tt := range tests {
</code_context>

<issue_to_address>
**suggestion (testing):** Extend layer media type conversion tests to cover non-ModelPack and non-weight ModelPack layers

This currently only exercises GGUF and Safetensors weight layers. To fully cover `convertLayerMediaType`, please add table-driven cases where:

- The layer media type has no CNCF/ModelPack prefix and is passed through unchanged.
- The media type is a ModelPack type that is not `*.weight.*` (e.g., `...doc...`, `...code...`) and is also preserved.

Construct `convertedArtifact` entries with these media types and assert the resulting manifest layer media types are unchanged.

Suggested implementation:

```golang
func TestConvertedArtifact_LayerMediaTypeConversion(t *testing.T) {
	tests := []struct {
		name          string
		format        string
		sourceMedia   ggcr.MediaType
		expectedMedia ggcr.MediaType
	}{
		{
			name:          "GGUF ModelPack weight converts to Docker GGUF",
			format:        "gguf",
			sourceMedia:   ggcr.MediaType("application/vnd.cncf.model.weight.v1.raw"),
			expectedMedia: types.MediaTypeGGUF,
		},
		{
			name:          "Safetensors ModelPack weight converts to Docker Safetensors",
			format:        "safetensors",
			sourceMedia:   ggcr.MediaType("application/vnd.cncf.model.weight.v1.safetensors"),
			expectedMedia: types.MediaTypeSafetensors,
		},
		{
			name:          "Non-ModelPack media type is preserved",
			format:        "gguf", // format should not affect non-ModelPack media types
			sourceMedia:   ggcr.MediaType("application/vnd.oci.image.layer.v1.tar+gzip"),
			expectedMedia: ggcr.MediaType("application/vnd.oci.image.layer.v1.tar+gzip"),
		},
		{
			name:          "Non-weight ModelPack media type is preserved",
			format:        "gguf", // only *.weight.* should be converted
			sourceMedia:   ggcr.MediaType("application/vnd.cncf.model.doc.v1+json"),
			expectedMedia: ggcr.MediaType("application/vnd.cncf.model.doc.v1+json"),
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			mpConfig := Model{
				Config: ModelConfig{
					Format: tt.format,
				},
				ModelFS: ModelFS{
					Type:    "layers",
					DiffIDs: []digest.Digest{"sha256:abc123def456abc123def456abc123def456abc123def456abc123def456abc1"},
				},
			}

```

To fully implement the test additions and satisfy the review comment, you should also:

1. In the remainder of `TestConvertedArtifact_LayerMediaTypeConversion`, ensure that the layer being passed into or constructed for `convertLayerMediaType` (or the equivalent conversion path) uses `tt.sourceMedia` as its media type. For example, if you currently have something like:
   ```go
   layer, _ := static.NewLayer([]byte("test"), ggcr.MediaType("application/vnd.cncf.model.weight.v1.raw"))
   ```
   change it to:
   ```go
   layer, _ := static.NewLayer([]byte("test"), tt.sourceMedia)
   ```
   or pass `tt.sourceMedia` wherever the original ModelPack layer media type is specified.

2. When asserting the result on the produced manifest or converted layer, use the table's `expectedMedia` for all cases, e.g.:
   ```go
   if got := manifest.Layers[0].MediaType; got != tt.expectedMedia {
       t.Errorf("Layer MediaType = %v, want %v", got, tt.expectedMedia)
   }
   ```
   This will:
   - Confirm that weight layers with ModelPack media types are converted to Docker GGUF/Safetensors.
   - Confirm that non-ModelPack media types are passed through unchanged.
   - Confirm that non-weight ModelPack media types (e.g. `doc`, `code`) are also preserved.

3. If `convertLayerMediaType` is not directly visible in this test and you construct a `convertedArtifact` instead, make sure the construction path for that artifact uses `tt.sourceMedia` for the layer media type (e.g., when creating a `v1.Layer` or when stubbing the source manifest) so the test is actually exercising the conversion logic for the new cases.
</issue_to_address>


@ericcurtin (Contributor)

What do you think about the AI review bot comments? And what do you think about converting versus Docker pulling this format as-is, without conversion, so it can push/pull/run without conversion? Pros/cons?


thc1006 commented Dec 19, 2025

@ericcurtin

Thank you very much for your thoughtful review and for raising this important architectural question. I truly appreciate you taking the time to guide me on this.


Regarding AI Review Bot Comments

I've reviewed all the suggestions carefully. They are helpful improvements (deep copy, test coverage, etc.). I'm happy to address them based on your preference.


Regarding Converting vs Native Support

Thank you for pointing this out — I hadn't fully considered the implications of my initial approach.

After reflecting on your question, I realize that my current implementation has limitations:

  • The digest changes after conversion
  • Models cannot be pushed back in their original ModelPack format
  • This breaks the complete push/pull/run cycle you mentioned

Your suggestion of "pulling the format as-is without conversion" makes a lot of sense for maintaining artifact integrity and full interoperability.

I've analyzed the codebase further and believe this can be achieved by:

  • Storing the original ModelPack format (preserving the digest)
  • Converting at read time instead of pull time
  • This would require changes to only a few files, with no modifications needed for the scheduler or inference backends

My Question

Would you prefer that I refactor the implementation to support native ModelPack storage as described above?

I want to make sure I'm aligned with the project's direction before proceeding. Please let me know your thoughts — I'm happy to adjust my approach based on your guidance.

Thank you again for your patience and valuable feedback!

This adds native support for CNCF ModelPack format through read-time
conversion instead of pull-time modification:

- Add IsModelPackConfig() using JSON parsing for robust format detection
- Add MapLayerMediaType() for layer media type mapping
- Add ConvertToDockerConfig() for config format conversion
- Integrate auto-detection in partial.ConfigFile()
- Simplify checkAndConvertCompat() to validate without modification
- Remove artifact.go/-test.go (-666 lines) - no longer needed

The read-time approach preserves original digests, which is crucial for
content-addressed storage and signature verification.

Signed-off-by: Kuan-Hsien Liu <[email protected]>

thc1006 commented Dec 19, 2025

@ericcurtin

TL;DR: Refactored to read-time conversion as discussed — ModelPack is now stored as-is, preserving the original digest.


What Changed

| Before | After |
|---|---|
| Convert at pull time | Convert at read time |
| Digest changes | Digest preserved ✅ |
| artifact.go wrapper (+666 lines) | Deleted (-666 lines) ✅ |

Implementation

  • IsModelPackConfig() — detects format via JSON structure parsing
  • partial.ConfigFile() — auto-converts when reading config
  • MapLayerMediaType() — maps layer types when querying layers
  • Stored artifact remains unchanged (push/pull/run cycle works)

This addresses the concerns you raised. Ready for review when you have a chance. Thank you!

@ericcurtin (Contributor)

Need to remove non-English comments


ericcurtin commented Dec 22, 2025

We should treat this as a new version of the model format in the DMR codebase and, if it eventually matures, start to push/pull in that format. If the implementation matures, we could eventually switch to this as the primary format, given no negative consequences of the change (for example, the existing DMR format treats layers as files to avoid duplication on disk; I haven't checked whether ModelPack does similar).

We should not convert between the two types.

No conversions: pushing and pulling in the existing format should continue to work as before, and pushing and pulling in the CNCF format should also be supported.
