Bug Description
An error was thrown when nvidia-modelopt was not installed, even though it was not used at all.
Traceback (most recent call last):
  File "/home/holywu/test.py", line 11, in <module>
    trt_model = torch_tensorrt.dynamo.compile(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/holywu/torch_2.8.dev/lib/python3.12/site-packages/torch_tensorrt/dynamo/_compiler.py", line 697, in compile
    gm = post_lowering(gm, settings)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/holywu/torch_2.8.dev/lib/python3.12/site-packages/torch_tensorrt/dynamo/lowering/passes/_aten_lowering_pass.py", line 102, in post_lowering
    gm = ATEN_POST_LOWERING_PASSES(gm, settings)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/holywu/torch_2.8.dev/lib/python3.12/site-packages/torch_tensorrt/dynamo/lowering/passes/pass_manager.py", line 56, in __call__
    out = _pass(out, settings)
          ^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/contextlib.py", line 81, in inner
    return func(*args, **kwds)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/holywu/torch_2.8.dev/lib/python3.12/site-packages/torch_tensorrt/dynamo/lowering/passes/constant_folding.py", line 34, in constant_fold
    cf.run()
  File "/home/holywu/torch_2.8.dev/lib/python3.12/site-packages/torch/_inductor/constant_folding.py", line 280, in run
    return super().run(initial_env=env)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/holywu/torch_2.8.dev/lib/python3.12/site-packages/torch/fx/interpreter.py", line 171, in run
    self.env[node] = self.run_node(node)
                     ^^^^^^^^^^^^^^^^^^^
  File "/home/holywu/torch_2.8.dev/lib/python3.12/site-packages/torch/_inductor/constant_folding.py", line 252, in run_node
    if self.is_impure(node):
       ^^^^^^^^^^^^^^^^^^^^
  File "/home/holywu/torch_2.8.dev/lib/python3.12/site-packages/torch_tensorrt/dynamo/lowering/passes/constant_folding.py", line 104, in is_impure
    if node.target in (torch.ops.tensorrt.quantize_op.default,):
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/holywu/torch_2.8.dev/lib/python3.12/site-packages/torch/_ops.py", line 1317, in __getattr__
    raise AttributeError(
AttributeError: '_OpNamespace' 'tensorrt' object has no attribute 'quantize_op'
To Reproduce
import torch
import torch_tensorrt
import torchvision.models as models

with torch.inference_mode():
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval().cuda()
    inputs = (torch.randn(1, 3, 224, 224, device="cuda"),)
    exported_program = torch.export.export(model, inputs)
    trt_model = torch_tensorrt.dynamo.compile(
        exported_program,
        inputs,
        min_block_size=1,
    )
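
The underlying lookup can also be triggered on its own, without going through torch_tensorrt.dynamo.compile. This is just a minimal isolation of the failing attribute access (assuming nothing else in the environment has registered the op):

import torch
import torch_tensorrt  # imported only to match the repro environment above

# Without nvidia-modelopt installed, nothing registers quantize_op in the
# 'tensorrt' op namespace, so the attribute access raises the same
# AttributeError as in the traceback.
try:
    torch.ops.tensorrt.quantize_op
except AttributeError as e:
    print(e)  # '_OpNamespace' 'tensorrt' object has no attribute 'quantize_op'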
Environment
- Torch-TensorRT Version (e.g. 1.0.0): 2.8.0.dev20250607+cu128
- PyTorch Version (e.g. 1.0): 2.8.0.dev20250607+cu128
- CPU Architecture: x64
- OS (e.g., Linux): Ubuntu 24.04 LTS
- How you installed PyTorch (conda, pip, libtorch, source): pip
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version: 3.12.3
- CUDA version: 12.8
- GPU models and configuration: RTX 4060 Ti
- Any other relevant information:
Additional context
Probably introduced by #3543
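
A possible mitigation, sketched below, would be to look the op up defensively in the constant folding pass instead of referencing torch.ops.tensorrt.quantize_op directly. This is only a sketch of the idea (the helper name is made up), not the actual is_impure implementation:

import torch

# Hypothetical helper: only expose the quantize op for the impurity check
# if something (e.g. nvidia-modelopt) has actually registered it.
def _registered_quantize_ops() -> tuple:
    quantize_op = getattr(torch.ops.tensorrt, "quantize_op", None)
    if quantize_op is None:
        # nvidia-modelopt is not installed, so the op was never registered.
        return ()
    return (quantize_op.default,)

is_impure could then test node.target in _registered_quantize_ops() without touching the missing attribute.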