author | wren romano <2998727+wrengr@users.noreply.github.com> | 2023-05-17 13:09:53 -0700
committer | wren romano <2998727+wrengr@users.noreply.github.com> | 2023-05-17 14:24:09 -0700
commit | a0615d020a02e252196383439e2c8143c6525e05 (patch)
tree | aa308ef0e4c62d7dba3450f0eb4f8f1dffc0f57c /mlir/test/Dialect/SparseTensor/sparse_reshape.mlir
parent | 4dc205f016e3dd2eb1182886a77676f24e39e329 (diff)
download | llvm-a0615d020a02e252196383439e2c8143c6525e05.tar.gz
[mlir][sparse] Renaming the STEA field `dimLevelType` to `lvlTypes`
This commit is part of the migration towards the new STEA syntax/design. In particular, this commit includes the following changes (a small syntax sketch follows below):
* Renaming compiler-internal functions/methods:
* `SparseTensorEncodingAttr::{getDimLevelType => getLvlTypes}`
* `Merger::{getDimLevelType => getLvlType}` (for consistency)
* `sparse_tensor::{getDimLevelType => buildLevelType}` (to help reduce confusion vs actual getter methods)
* Renaming external facets to match:
* the STEA parser and printer
* the C and Python bindings
* PyTACO
However, the actual renaming of the `DimLevelType` itself (along with all the "dlt" names) will be handled in a separate commit.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D150330
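To make the surface-syntax impact of the rename concrete, here is a minimal MLIR sketch assuming the renamed parser/printer from this commit; the `@roundtrip` helper function is illustrative and not part of the patch:

```mlir
// Old STEA spelling that this commit renames:
//   #sparse_tensor.encoding<{ dimLevelType = [ "compressed", "compressed" ] }>
// New STEA spelling after this commit:
#SparseMatrix = #sparse_tensor.encoding<{ lvlTypes = [ "compressed", "compressed" ] }>

// A tensor type carrying the renamed encoding, mirroring the updated test below.
func.func @roundtrip(%arg0: tensor<10x10xf64, #SparseMatrix>) -> tensor<10x10xf64, #SparseMatrix> {
  return %arg0 : tensor<10x10xf64, #SparseMatrix>
}
```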
Diffstat (limited to 'mlir/test/Dialect/SparseTensor/sparse_reshape.mlir')
-rw-r--r-- | mlir/test/Dialect/SparseTensor/sparse_reshape.mlir | 12
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/mlir/test/Dialect/SparseTensor/sparse_reshape.mlir b/mlir/test/Dialect/SparseTensor/sparse_reshape.mlir
index 49eee201fc32..704a2b2bc64c 100644
--- a/mlir/test/Dialect/SparseTensor/sparse_reshape.mlir
+++ b/mlir/test/Dialect/SparseTensor/sparse_reshape.mlir
@@ -3,8 +3,8 @@
 // RUN: mlir-opt %s --post-sparsification-rewrite="enable-runtime-library=false enable-convert=false" \
 // RUN: --cse --canonicalize | FileCheck %s --check-prefix=CHECK-RWT
 
-#SparseVector = #sparse_tensor.encoding<{ dimLevelType = [ "compressed" ] }>
-#SparseMatrix = #sparse_tensor.encoding<{ dimLevelType = [ "compressed", "compressed" ] }>
+#SparseVector = #sparse_tensor.encoding<{ lvlTypes = [ "compressed" ] }>
+#SparseMatrix = #sparse_tensor.encoding<{ lvlTypes = [ "compressed", "compressed" ] }>
 
 //
 // roundtrip:
@@ -62,7 +62,7 @@
 // CHECK-RWT: }
 // CHECK-RWT: %[[NT1:.*]] = sparse_tensor.load %[[RET]] hasInserts
 // CHECK-RWT-NOT: sparse_tensor.convert
-// CHECK-RWT: return %[[NT1]] : tensor<10x10xf64, #sparse_tensor.encoding<{ dimLevelType = [ "compressed", "compressed" ] }>>
+// CHECK-RWT: return %[[NT1]] : tensor<10x10xf64, #sparse_tensor.encoding<{ lvlTypes = [ "compressed", "compressed" ] }>>
 //
 func.func @sparse_expand(%arg0: tensor<100xf64, #SparseVector>) -> tensor<10x10xf64, #SparseMatrix> {
   %0 = tensor.expand_shape %arg0 [[0, 1]] :
@@ -135,7 +135,7 @@ func.func @sparse_expand(%arg0: tensor<100xf64, #SparseVector>) -> tensor<10x10x
 // CHECK-RWT: }
 // CHECK-RWT: %[[NT1:.*]] = sparse_tensor.load %[[RET]] hasInserts
 // CHECK-RWT-NOT: sparse_tensor.convert
-// CHECK-RWT: return %[[NT1]] : tensor<100xf64, #sparse_tensor.encoding<{ dimLevelType = [ "compressed" ] }>>
+// CHECK-RWT: return %[[NT1]] : tensor<100xf64, #sparse_tensor.encoding<{ lvlTypes = [ "compressed" ] }>>
 //
 func.func @sparse_collapse(%arg0: tensor<10x10xf64, #SparseMatrix>) -> tensor<100xf64, #SparseVector> {
   %0 = tensor.collapse_shape %arg0 [[0, 1]] :
@@ -210,7 +210,7 @@ func.func @sparse_collapse(%arg0: tensor<10x10xf64, #SparseMatrix>) -> tensor<10
 // CHECK-RWT: }
 // CHECK-RWT: %[[NT1:.*]] = sparse_tensor.load %[[RET]] hasInserts
 // CHECK-RWT-NOT: sparse_tensor.convert
-// CHECK-RWT: return %[[NT1]] : tensor<?x10xf64, #sparse_tensor.encoding<{ dimLevelType = [ "compressed", "compressed" ] }>>
+// CHECK-RWT: return %[[NT1]] : tensor<?x10xf64, #sparse_tensor.encoding<{ lvlTypes = [ "compressed", "compressed" ] }>>
 //
 func.func @dynamic_sparse_expand(%arg0: tensor<?xf64, #SparseVector>) -> tensor<?x10xf64, #SparseMatrix> {
   %0 = tensor.expand_shape %arg0 [[0, 1]] :
@@ -292,7 +292,7 @@ func.func @dynamic_sparse_expand(%arg0: tensor<?xf64, #SparseVector>) -> tensor<
 // CHECK-RWT: }
 // CHECK-RWT: %[[NT1:.*]] = sparse_tensor.load %[[RET]] hasInserts
 // CHECK-RWT-NOT: sparse_tensor.convert
-// CHECK-RWT: return %[[NT1]] : tensor<?xf64, #sparse_tensor.encoding<{ dimLevelType = [ "compressed" ] }>>
+// CHECK-RWT: return %[[NT1]] : tensor<?xf64, #sparse_tensor.encoding<{ lvlTypes = [ "compressed" ] }>>
 //
 func.func @dynamic_sparse_collapse(%arg0: tensor<10x?xf64, #SparseMatrix>) -> tensor<?xf64, #SparseVector> {
   %0 = tensor.collapse_shape %arg0 [[0, 1]] :