versions.h

#pragma once
#include <cstdint>
namespace caffe2 {
namespace serialize {
constexpr uint64_t kMinSupportedFileFormatVersion = 0x1L;
constexpr uint64_t kMaxSupportedFileFormatVersion = 0xAL;
// Versions (i.e. why was the version number bumped?)
// Note [Dynamic Versions and torch.jit.save vs. torch.save]
//
// Our versioning scheme has a "produced file format version" which
// describes how an archive is to be read. The version written in an archive
// is at least this current produced file format version, but may be greater
// if it includes certain symbols. We refer to these conditional versions
// as "dynamic," since they are identified at runtime.
//
// Dynamic versioning is useful when an operator's semantics are updated.
// When using torch.jit.save we want those semantics to be preserved. If
// we bumped the produced file format version on every change, however,
// then older versions of PyTorch couldn't read even simple archives, like
// a single tensor, from newer versions of PyTorch. Instead, we
// assign dynamic versions to these changes that override the
// produced file format version as needed. That is, when the semantics
// of torch.div changed it was assigned dynamic version 4, and modules
// saved with torch.jit.save that use torch.div also carry (at least)
// version 4 in their archives. This prevents earlier versions of PyTorch
// from accidentally performing the wrong kind of division. Modules
// that don't use torch.div or other operators with dynamic versions
// can write the produced file format version, and these programs will
// run as expected on earlier versions of PyTorch. (An illustrative sketch
// of how a writer could combine these versions appears after
// kProducedFileFormatVersion below.)
//
// While torch.jit.save attempts to preserve operator semantics,
// torch.save does not. torch.save is analogous to pickling in Python, so
// a function that uses torch.div will behave differently if torch.saved
// and torch.loaded across PyTorch versions. From a technical perspective,
// torch.save ignores dynamic versioning.
// 1. Initial version
// 2. Removed op_version_set version numbers
// 3. Added type tags to pickle serialization of container types
// 4. (Dynamic) Stopped integer division using torch.div
//      (a versioned symbol preserves the historic behavior of versions 1--3)
// 5. (Dynamic) Stopped torch.full from inferring a floating point dtype
//      when given bool or integer fill values.
// 6. Write version string to `./data/version` instead of `version`.
// [12/15/2021]
// kProducedFileFormatVersion was set to 7 (from 3) due to a different
// interpretation of what the file format version is.
// Whenever a new upgrader is introduced, this number should be bumped.
// The reasons the version was bumped in the past:
//     1. aten::div was changed at version 4
//     2. aten::full was changed at version 5
//     3. torch.package uses version 6
//     4. A new upgrader design was introduced, and the version number was
//        set to 7 to mark this change
// --------------------------------------------------
// We describe new operator version bump reasons here:
// 1) [01/24/2022]
//     We bump the version number to 8 to update aten::linspace
//     and aten::linspace.out to error out when steps is not
//     provided. (see: https://github.com/pytorch/pytorch/issues/55951)
// 2) [01/30/2022]
//     Bump the version number to 9 to update aten::logspace and
//     aten::logspace.out to error out when steps is not
//     provided. (see: https://github.com/pytorch/pytorch/issues/55951)
// 3) [02/11/2022]
//     Bump the version number to 10 to update aten::gelu and
//     aten::gelu.out to support the new approximate kwarg.
//     (see: https://github.com/pytorch/pytorch/pull/61439)
constexpr uint64_t kProducedFileFormatVersion = 0xAL;
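// Illustrative sketch only: exampleArchiveVersion is a hypothetical helper,
// not part of this header's real API. It shows how, under the dynamic
// versioning scheme described above, a writer could derive the version
// recorded in an archive: the produced file format version, overridden by a
// larger dynamic version (e.g. 0x4L for torch.div) when the module uses a
// versioned operator.
inline constexpr uint64_t exampleArchiveVersion(uint64_t dynamic_op_version) {
  // Record whichever is greater: the produced file format version or the
  // dynamic version required by an operator the module uses.
  return dynamic_op_version > kProducedFileFormatVersion
      ? dynamic_op_version
      : kProducedFileFormatVersion;
}
// For a module that uses no operators with dynamic versions, passing 0
// yields kProducedFileFormatVersion unchanged.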
// Absolute minimum version at which we will write packages. This
// means that every package from now on will always be
// greater than this number.
constexpr uint64_t kMinProducedFileFormatVersion = 0x3L;
// The version we write when the archive contains bytecode.
// It must be higher than or equal to kProducedFileFormatVersion, because a
// TorchScript change is likely to introduce a bytecode change.
// If kProducedFileFormatVersion is increased, kProducedBytecodeVersion
// should be increased too. The relationship is:
//     kMaxSupportedFileFormatVersion >= (most likely ==) kProducedBytecodeVersion
//         >= kProducedFileFormatVersion
// If a format change is forward compatible (still readable by older
// executables), we will not increment the version number, to minimize the
// risk of breaking existing clients. TODO: A better way would be to allow
// the caller that creates a model to specify a maximum version that its
// clients can accept.
// Versions:
// 0x1L: Initial version
// 0x2L: (Comment missing)
// 0x3L: (Comment missing)
// 0x4L: (update) Added schema to function tuple. Forward-compatible change.
// 0x5L: (update) Bytecode now shares constant tensor files with
// TorchScript, and only serializes extra tensors that are not in the
// TorchScript constant table. The tensor storage schema is also updated to
// the unified format: the root key of tensor storage changes from {index} to
// {the_pointer_value_the_tensor.storage}, for example:
// `140245072983168.storage`. Forward-compatible change.
// 0x6L: Implicit operator versioning using the number of specified arguments.
// Refer to the summary of https://github.com/pytorch/pytorch/pull/56845 for
// details.
// 0x7L: Enabled support for operators with default arguments plus out
// arguments. See https://github.com/pytorch/pytorch/pull/63651 for
// details.
// 0x8L: Emit promoted operators as instructions. See
// https://github.com/pytorch/pytorch/pull/71662 for details.
// 0x9L: Change serialization format from pickle to flatbuffer. This version
// serves the migration; v8 pickle and v9 flatbuffer are the same. Refer to
// the summary of https://github.com/pytorch/pytorch/pull/75201 for more
// details.
constexpr uint64_t kProducedBytecodeVersion = 0x8L;
// static_assert(
//     kProducedBytecodeVersion >= kProducedFileFormatVersion,
//     "kProducedBytecodeVersion must be higher or equal to
//     kProducedFileFormatVersion.");
// Introduce kMinSupportedBytecodeVersion and kMaxSupportedBytecodeVersion
// for limited backward/forward compatibility support of bytecode. If
// kMinSupportedBytecodeVersion <= model_version <= kMaxSupportedBytecodeVersion
// (in loader), we should support this model_version. For example, we provide a
// wrapper to handle an updated operator.
constexpr uint64_t kMinSupportedBytecodeVersion = 0x4L;
constexpr uint64_t kMaxSupportedBytecodeVersion = 0x9L;
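// Illustrative sketch only: these are hypothetical helpers, not part of this
// header's real API. They show the inclusive range checks a loader could
// apply to the versions recorded in an archive, per the supported ranges
// declared above.
inline constexpr bool exampleIsSupportedFileFormatVersion(uint64_t version) {
  // Accept file format versions inside the supported window.
  return version >= kMinSupportedFileFormatVersion &&
      version <= kMaxSupportedFileFormatVersion;
}
inline constexpr bool exampleIsSupportedBytecodeVersion(uint64_t model_version) {
  // Accept bytecode versions inside the supported window; older but still
  // supported versions may be handled via an upgrader/wrapper, as noted above.
  return model_version >= kMinSupportedBytecodeVersion &&
      model_version <= kMaxSupportedBytecodeVersion;
}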
} // namespace serialize
} // namespace caffe2