rapidsai_public_repos/rapids-triton/README.md
<!--
Copyright (c) 2021, NVIDIA CORPORATION.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)

# The RAPIDS-Triton Library

This project is designed to make it easy to integrate any C++-based algorithm into the NVIDIA Triton Inference Server. Originally developed to assist with the integration of RAPIDS algorithms, this library can be used by anyone to quickly get up and running with a custom backend for Triton.

## Background

### Triton

The [NVIDIA Triton Inference Server](https://developer.nvidia.com/nvidia-triton-inference-server) offers a complete open-source solution for deployment of machine learning models from a wide variety of ML frameworks (PyTorch, Tensorflow, ONNX, XGBoost, etc.) on both CPU and GPU hardware. It allows you to maximize inference performance in production (whether that means maximizing throughput, minimizing latency, or optimizing some other metric) regardless of how you may have trained your ML model. Through smart batching, efficient pipeline handling, and tools to simplify deployments almost anywhere, Triton helps make production inference serving simpler and more cost-effective.

### Custom Backends

While Triton natively supports many common ML frameworks, you may wish to take advantage of Triton's features for something a little more specialized. Triton provides support for different kinds of models via "backends": modular libraries which provide the specialized logic for those models. Triton allows you to create custom backends in [Python](https://github.com/triton-inference-server/python_backend), but for those who wish to use C++ directly, RAPIDS-Triton can help simplify the process of developing your backend.

The goal of RAPIDS-Triton is not to facilitate every possible use case of the Triton backend API but to make the most common uses of this API easier by providing a simpler interface to them. That being said, if there is a feature of the Triton backend API which RAPIDS-Triton does not expose and which you wish to use in a custom backend, please [submit a feature request](https://github.com/rapidsai/rapids-triton/issues), and we will see if it can be added.

## Simple Example

In the `cpp/src` directory of this repository, you can see a complete, annotated example of a backend built with RAPIDS-Triton. The core of any backend is defining the `predict` function for your model, as shown below:

```
void predict(rapids::Batch& batch) const {
  rapids::Tensor<float> input = get_input<float>(batch, "input__0");
  rapids::Tensor<float> output = get_output<float>(batch, "output__0");
  rapids::copy(output, input);
  output.finalize();
}
```

In this example, we ask Triton to provide a tensor named `"input__0"` and copy it to an output tensor named `"output__0"`. Thus, our "inference" function in this simple example is just a passthrough from one input tensor to one output tensor.

To do something more sophisticated in this `predict` function, we might take advantage of the `data()` method of Tensor objects, which provides a raw pointer (on host or device) to the underlying data, along with `size()` and `mem_type()`, which report the number of elements in the Tensor and whether those elements are stored on host or device, respectively. Note that `finalize()` must be called on all output tensors before returning from the `predict` function.

For a much more detailed look at developing backends with RAPIDS-Triton, check out our complete [usage guide](https://github.com/rapidsai/rapids-triton/blob/main/docs/usage.md).

## Contributing

If you wish to contribute to RAPIDS-Triton, please see our [contributors' guide](https://github.com/rapidsai/rapids-triton/blob/main/CONTRIBUTING.md) for tips and full details on how to get started.
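On the client side, the `rapids_triton` Python package in this repository's `python/` directory provides a small wrapper for exercising a backend like the identity example. The sketch below is a minimal, hypothetical illustration: it assumes a Triton server built with the example backend is already running locally, and the model name `rapids_identity` is a placeholder chosen for the example rather than something prescribed by this README.

```python
# Minimal sketch of calling an identity-style model from Python using the
# rapids_triton client helper in this repository. Assumes a local Triton
# server is running and that a model named "rapids_identity" (hypothetical
# name) exposes float32 tensors "input__0" and "output__0".
import numpy as np

from rapids_triton import Client

client = Client(protocol='grpc', host='localhost')
client.wait_for_server(timeout=60)  # wait up to 60 s for server readiness

input_array = np.random.rand(4, 8).astype('float32')

result = client.predict(
    'rapids_identity',                  # hypothetical model name
    {'input__0': input_array},          # input name -> numpy array
    {'output__0': input_array.nbytes},  # output name -> size in bytes
)

# For the identity example, the output should match the input exactly.
np.testing.assert_array_equal(result['output__0'], input_array)
```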
rapidsai_public_repos/rapids-triton/build.sh
#!/bin/bash # Copyright (c) 2021, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. set -e REPODIR=$(cd $(dirname $0); pwd) NUMARGS=$# ARGS=$* VALIDTARGETS="example tests" VALIDFLAGS="--cpu-only -g -h --help" VALIDARGS="${VALIDTARGETS} ${VALIDFLAGS}" HELP="$0 [<target> ...] [<flag> ...] where <target> is: example - build the identity backend example tests - build container(s) with unit tests and <flag> is: -g - build for debug -h - print this text --cpu-only - build CPU-only versions of targets --tag-commit - tag docker images based on current git commit default action (no args) is to build all targets The following environment variables are also accepted to allow further customization: BASE_IMAGE - Base image for Docker images TRITON_VERSION - Triton version to use for build EXAMPLE_TAG - The tag to use for the server image TEST_TAG - The tag to use for the test image " BUILD_TYPE=Release TRITON_ENABLE_GPU=ON DOCKER_ARGS="" export DOCKER_BUILDKIT=1 function hasArg { (( ${NUMARGS} != 0 )) && (echo " ${ARGS} " | grep -q " $1 ") } function completeBuild { (( ${NUMARGS} == 0 )) && return for a in ${ARGS}; do if (echo " ${VALIDTARGETS} " | grep -q " ${a} "); then false; return fi done true } if hasArg -h || hasArg --help; then echo "${HELP}" exit 0 fi # Long arguments LONG_ARGUMENT_LIST=( "cpu-only" "tag-commit" ) # Short arguments ARGUMENT_LIST=( "g" ) # read arguments opts=$(getopt \ --longoptions "$(printf "%s," "${LONG_ARGUMENT_LIST[@]}")" \ --name "$(basename "$0")" \ --options "$(printf "%s" "${ARGUMENT_LIST[@]}")" \ -- "$@" ) if [ $? != 0 ] ; then echo "Terminating..." >&2 ; exit 1 ; fi eval set -- "$opts" while true do case "$1" in -g | --debug ) BUILD_TYPE=Debug ;; --cpu-only ) TRITON_ENABLE_GPU=OFF ;; --tag-commit ) [ -z $EXAMPLE_TAG ] \ && EXAMPLE_TAG="rapids_triton_identity:$(cd $REPODIR; git rev-parse --short HEAD)" \ || true [ -z $TEST_TAG ] \ && TEST_TAG="rapids_triton_identity_test:$(cd $REPODIR; git rev-parse --short HEAD)" \ || true ;; --) shift break ;; esac shift done if [ -z $EXAMPLE_TAG ] then EXAMPLE_TAG='rapids_triton_identity' fi if [ -z $TEST_TAG ] then TEST_TAG='rapids_triton_identity_test' fi DOCKER_ARGS="$DOCKER_ARGS --build-arg BUILD_TYPE=${BUILD_TYPE}" DOCKER_ARGS="$DOCKER_ARGS --build-arg TRITON_ENABLE_GPU=${TRITON_ENABLE_GPU}" if [ ! -z $BASE_IMAGE ] then DOCKER_ARGS="$DOCKER_ARGS --build-arg BASE_IMAGE=${BASE_IMAGE}" fi if [ ! -z $TRITON_VERSION ] then DOCKER_ARGS="$DOCKER_ARGS --build-arg TRITON_VERSION=${TRITON_VERSION}" fi if completeBuild || hasArg example then BACKEND=1 DOCKER_ARGS="$DOCKER_ARGS --build-arg BUILD_EXAMPLE=ON" fi if completeBuild || hasArg tests then TESTS=1 DOCKER_ARGS="$DOCKER_ARGS --build-arg BUILD_TESTS=ON" fi if [ $BACKEND -eq 1 ] then docker build \ $DOCKER_ARGS \ -t "$EXAMPLE_TAG" \ $REPODIR fi if [ $TESTS -eq 1 ] then docker build \ $DOCKER_ARGS \ -t "$EXAMPLE_TAG" \ --target test-stage \ -t "$TEST_TAG" \ $REPODIR fi
rapidsai_public_repos/rapids-triton/.dockerignore
cpp/build
rapidsai_public_repos/rapids-triton/Dockerfile
# syntax=docker/dockerfile:experimental # Copyright (c) 2021-2022, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ########################################################################################### # Arguments for controlling build details ########################################################################################### # Version of Triton to use ARG TRITON_VERSION=22.08 # Base container image ARG BASE_IMAGE=nvcr.io/nvidia/tritonserver:${TRITON_VERSION}-py3 # Whether or not to build indicated components ARG BUILD_TESTS=OFF ARG BUILD_EXAMPLE=ON # Whether or not to enable GPU build ARG TRITON_ENABLE_GPU=ON FROM ${BASE_IMAGE} as base ENV PATH="/root/miniconda3/bin:${PATH}" RUN apt-get update \ && apt-get install --no-install-recommends -y wget patchelf \ && apt-get clean \ && rm -rf /var/lib/apt/lists/* ENV PYTHONDONTWRITEBYTECODE=true RUN wget \ https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \ && mkdir /root/.conda \ && bash Miniconda3-latest-Linux-x86_64.sh -b \ && rm -f Miniconda3-latest-Linux-x86_64.sh COPY ./conda/environments/rapids_triton_dev.yml /environment.yml RUN conda env update -f /environment.yml \ && rm /environment.yml \ && conda clean -afy \ && find /root/miniconda3/ -follow -type f -name '*.pyc' -delete \ && find /root/miniconda3/ -follow -type f -name '*.js.map' -delete ENV PYTHONDONTWRITEBYTECODE=false SHELL ["conda", "run", "--no-capture-output", "-n", "rapids_triton_dev", "/bin/bash", "-c"] FROM base as build-stage COPY ./cpp /rapids_triton ARG TRITON_VERSION ENV TRITON_VERSION=$TRITON_VERSION ARG BUILD_TYPE=Release ENV BUILD_TYPE=$BUILD_TYPE ARG BUILD_TESTS ENV BUILD_TESTS=$BUILD_TESTS ARG BUILD_EXAMPLE ENV BUILD_EXAMPLE=$BUILD_EXAMPLE ARG TRITON_ENABLE_GPU ENV TRITON_ENABLE_GPU=$TRITON_ENABLE_GPU RUN mkdir /rapids_triton/build WORKDIR /rapids_triton/build RUN cmake \ -GNinja \ -DCMAKE_BUILD_TYPE="${BUILD_TYPE}" \ -DBUILD_TESTS="${BUILD_TESTS}" \ -DBUILD_EXAMPLE="${BUILD_EXAMPLE}" \ -DTRITON_ENABLE_GPU="${TRITON_ENABLE_GPU}" \ .. 
ENV CCACHE_DIR=/ccache RUN --mount=type=cache,target=/ccache/ ninja install FROM base as test-install COPY ./conda/environments/rapids_triton_test.yml /environment.yml RUN conda env update -f /environment.yml \ && rm /environment.yml \ && conda clean -afy \ && find /root/miniconda3/ -follow -type f -name '*.pyc' -delete \ && find /root/miniconda3/ -follow -type f -name '*.js.map' -delete COPY ./python /rapids_triton RUN conda run -n rapids_triton_test pip install /rapids_triton \ && rm -rf /rapids_triton FROM build-stage as test-stage COPY --from=test-install /root/miniconda3 /root/miniconda3 ENV TEST_EXE=/rapids_triton/build/test_rapids_triton COPY qa /qa ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "rapids_triton_test", "/bin/bash", "/qa/entrypoint.sh"] FROM ${BASE_IMAGE} RUN mkdir /models # Remove existing backend install RUN if [ -d /opt/tritonserver/backends/rapids-identity ]; \ then \ rm -rf /opt/tritonserver/backends/rapids-identity/*; \ fi COPY --from=build-stage \ /opt/tritonserver/backends/rapids-identity \ /opt/tritonserver/backends/rapids-identity ENTRYPOINT ["tritonserver", "--model-repository=/models"]
rapidsai_public_repos/rapids-triton/CONTRIBUTING.md
<!--
Copyright (c) 2021, NVIDIA CORPORATION.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!-- TODO(wphicks): Add more detail -->

# Contributing to RAPIDS-Triton

You can help improve RAPIDS-Triton in any of the following ways:

- Submitting a bug report, feature request or documentation issue
- Proposing and implementing a new feature
- Implementing a feature or bug-fix for an outstanding issue

## Bug reports

When submitting a bug report, please include a *minimum* *reproducible* example. Ideally, this should be a snippet of code that other developers can copy, paste, and immediately run to try to reproduce the error. Please:

- Do include import statements and any other code necessary to immediately run your example
- Avoid examples that require other developers to download models or data unless you cannot reproduce the problem with synthetically-generated data

## Code Contributions

To contribute code to this project, please follow these steps:

1. Find an issue to work on or submit an issue documenting the problem you would like to work on.
2. Comment on the issue saying that you plan to work on it.
3. Review the conventions below for information to help you make your changes in a way that is consistent with the rest of the codebase.
4. Code!
5. Create your pull request.
6. Wait for other developers to review your code and update your PR as needed.
7. Once a PR is approved, it will be merged into the main branch.

### Coding Conventions

* RAPIDS-Triton follows [Almost Always Auto (AAA)](https://herbsutter.com/2013/08/12/gotw-94-solution-aaa-style-almost-always-auto/) style. Please maintain this style in any contributions, with the possible exception of some docs, where type information may be helpful for new users trying to understand a snippet in isolation.
* Avoid raw loops where possible.
* C++ versions of types should be used instead of C versions except when interfacing with C code (e.g. use `std::size_t` instead of `size_t`).
* Avoid using output pointers in function signatures. Prefer instead to actually return the value computed by the function and take advantage of return value optimization and move semantics.

### Signing Your Work

* We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
* Any contribution which contains commits that are not Signed-Off will not be accepted.
* To sign off on a commit you simply use the `--signoff` (or `-s`) option when committing your changes:

  ```bash
  $ git commit -s -m "Add cool feature."
  ```

  This will append the following to your commit message:

  ```
  Signed-off-by: Your Name <your@email.com>
  ```

* Full text of the DCO:

  ```
  Developer Certificate of Origin
  Version 1.1

  Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
  1 Letterman Drive
  Suite D4700
  San Francisco, CA, 94129

  Everyone is permitted to copy and distribute verbatim copies of this
  license document, but changing it is not allowed.

  Developer's Certificate of Origin 1.1

  By making a contribution to this project, I certify that:

  (a) The contribution was created in whole or in part by me and I
      have the right to submit it under the open source license
      indicated in the file; or

  (b) The contribution is based upon previous work that, to the best
      of my knowledge, is covered under an appropriate open source
      license and I have the right under that license to submit that
      work with modifications, whether created in whole or in part
      by me, under the same open source license (unless I am
      permitted to submit under a different license), as indicated
      in the file; or

  (c) The contribution was provided directly to me by some other
      person who certified (a), (b) or (c) and I have not modified
      it.

  (d) I understand and agree that this project and the contribution
      are public and that a record of the contribution (including all
      personal information I submit with it, including my sign-off) is
      maintained indefinitely and may be redistributed consistent with
      this project or the open source license(s) involved.
  ```
rapidsai_public_repos/rapids-triton/LICENSE
Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. 
This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "{}" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright 2021 NVIDIA CORPORATION Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
rapidsai_public_repos/rapids-triton/python/pyproject.toml
[build-system]
requires = ["setuptools", "wheel"]
build-backend = "setuptools.build_meta"
rapidsai_public_repos/rapids-triton/python/setup.py
# Copyright (c) 2021-2022, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from setuptools import setup, find_packages

setup(
    name='rapids_triton',
    description="Tools for clients to RAPIDS-Triton backends",
    version='22.02.00',  # TODO(wphicks): versioneer
    author='NVIDIA Corporation',
    license='Apache',
    packages=find_packages(),
    install_requires=[
        'numpy',
        'tritonclient[all]'
    ]
)
rapidsai_public_repos/rapids-triton/python/rapids_triton/testing.py
# Copyright (c) 2021, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import numpy as np from rapids_triton.logging import logger from rapids_triton.triton.client import STANDARD_PORTS from rapids_triton.client import Client def arrays_close( a, b, atol=None, rtol=None, total_atol=None, total_rtol=None, assert_close=False): """ Compare numpy arrays for approximate equality :param numpy.array a: The array to compare against a reference value :param numpy.array b: The reference array to compare against :param float atol: The maximum absolute difference allowed between an element in a and an element in b before they are considered non-close. If both atol and rtol are set to None, atol is assumed to be 0. If atol is set to None and rtol is not None, no absolute threshold is used in comparisons. :param float rtol: The maximum relative difference allowed between an element in a and an element in b before they are considered non-close. If rtol is set to None, no relative threshold is used in comparisons. :param int total_atol: The maximum number of elements allowed to be non-close before the arrays are considered non-close. :param float total_rtol: The maximum proportion of elements allowed to be non-close before the arrays are considered non-close. """ if np.any(a.shape != b.shape): if assert_close: raise AssertionError( "Arrays have different shapes:\n{} vs. {}".format( a.shape, b.shape ) ) return False if a.size == 0 and b.size == 0: return True if atol is None and rtol is None: atol = 0 if total_atol is None and total_rtol is None: total_atol = 0 diff_mask = np.ones(a.shape, dtype='bool') diff = np.abs(a-b) if atol is not None: diff_mask = np.logical_and(diff_mask, diff > atol) if rtol is not None: diff_mask = np.logical_and(diff_mask, diff > rtol * np.abs(b)) is_close = True mismatch_count = np.sum(diff_mask) if total_atol is not None and mismatch_count > total_atol: is_close = False mismatch_proportion = mismatch_count / a.size if total_rtol is not None and mismatch_proportion > total_rtol: is_close = False if assert_close and not is_close: total_tol_desc = [] if total_atol is not None: total_tol_desc.append(str(int(total_atol))) if total_rtol is not None: total_tol_desc.append( "{:.2f} %".format(total_rtol * 100) ) total_tol_desc = " or ".join(total_tol_desc) msg = """Arrays have more than {} mismatched elements. 
Mismatch in {} ({:.2f} %) elements a: {} b: {} Mismatched indices: {}""".format( total_tol_desc, mismatch_count, mismatch_proportion * 100, a, b, np.transpose(np.nonzero(diff_mask))) raise AssertionError(msg) return is_close def get_random_seed(): """Provide random seed to allow for easer reproduction of testing failures Note: Code taken directly from cuML testing infrastructure""" current_random_seed = os.getenv('PYTEST_RANDOM_SEED') if current_random_seed is not None and current_random_seed.isdigit(): random_seed = int(current_random_seed) else: random_seed = np.random.randint(0, 1e6) os.environ['PYTEST_RANDOM_SEED'] = str(random_seed) logger.info("Random seed value: %d", random_seed) return random_seed
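For illustration, a short usage sketch of the two helpers defined above; the array values are made up purely for the example.

```python
# Illustrative use of arrays_close and get_random_seed from
# rapids_triton.testing. The values below are invented for the example.
import numpy as np

from rapids_triton.testing import arrays_close, get_random_seed

np.random.seed(get_random_seed())  # seed is logged so failures can be reproduced

a = np.array([1.0, 2.0, 3.0], dtype='float32')
b = a + np.array([0.0, 1e-7, 2e-7], dtype='float32')

# Elementwise comparison: every element of b is within 1e-6 of a.
assert arrays_close(a, b, atol=1e-6)

# With assert_close=True, a mismatch raises an AssertionError that reports
# which elements differ and by how much.
c = a + np.array([0.0, 0.0, 0.5], dtype='float32')
try:
    arrays_close(a, c, atol=1e-6, assert_close=True)
except AssertionError as err:
    print(err)
```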
rapidsai_public_repos/rapids-triton/python/rapids_triton/exceptions.py
# Copyright (c) 2022, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


class IncompatibleSharedMemory(Exception):
    """Error thrown if operation cannot be completed with given shared memory type"""
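This exception is raised by the shared-memory helpers elsewhere in the package, for example by `destroy_shared_memory_region` in `rapids_triton.triton.io` when asked to release memory that was never placed in a shared region. A minimal sketch of catching it (assumes `tritonclient` is installed):

```python
# Sketch: IncompatibleSharedMemory signals shared-memory misuse, e.g. asking
# destroy_shared_memory_region to release a region when no shared memory
# type was used for the request.
from rapids_triton.exceptions import IncompatibleSharedMemory
from rapids_triton.triton.io import destroy_shared_memory_region

try:
    destroy_shared_memory_region(None, shared_mem=None)
except IncompatibleSharedMemory as err:
    print(f"not a shared-memory region: {err}")
```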
rapidsai_public_repos/rapids-triton/python/rapids_triton/logging.py
# Copyright (c) 2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging

logger = logging.getLogger('rapids_triton')
logger.setLevel(logging.INFO)
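Because this is a standard `logging.Logger`, callers can adjust its verbosity or attach handlers in the usual way; a short sketch:

```python
# Sketch: the rapids_triton logger is a plain logging.Logger, so normal
# logging configuration applies on the client side.
import logging

from rapids_triton.logging import logger

logging.basicConfig()            # make sure a handler/formatter exists
logger.setLevel(logging.DEBUG)   # surface debug-level messages
logger.debug("verbose client-side diagnostics enabled")
```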
rapidsai_public_repos/rapids-triton/python/rapids_triton/__init__.py
# Copyright (c) 2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from rapids_triton.client import Client
from rapids_triton.logging import logger
rapidsai_public_repos/rapids-triton/python/rapids_triton/client.py
# Copyright (c) 2021, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from collections import namedtuple import concurrent.futures import time from rapids_triton.triton.client import get_triton_client from rapids_triton.triton.io import ( create_triton_input, create_triton_output, destroy_shared_memory_region ) from rapids_triton.triton.dtype import dtype_to_triton_name from rapids_triton.triton.response import get_response_data from tritonclient import utils as triton_utils # TODO(wphicks): Propagate device ids for cuda shared memory MultiModelOutput = namedtuple('MultiModelOutput', ('name', 'version', 'output')) class Client(object): def __init__( self, protocol='grpc', host='localhost', port=None, concurrency=4): self.triton_client = get_triton_client( protocol=protocol, host=host, port=port, concurrency=concurrency ) self._protocol = protocol @property def protocol(self): return self._protocol def create_inputs(self, array_inputs, shared_mem=None): return [ create_triton_input( self.triton_client, arr, name, dtype_to_triton_name(arr.dtype), protocol=self.protocol, shared_mem=shared_mem ) for name, arr in array_inputs.items() ] def create_outputs(self, output_sizes, shared_mem=None): return { name: create_triton_output( self.triton_client, size, name, protocol=self.protocol, shared_mem=shared_mem ) for name, size in output_sizes.items() } def wait_for_server(self, timeout): server_wait_start = time.time() while True: try: if self.triton_client.is_server_ready(): break except triton_utils.InferenceServerException: pass if time.time() - server_wait_start > timeout: raise RuntimeError("Server startup timeout expired") time.sleep(1) def clear_shared_memory(self): self.triton_client.unregister_cuda_shared_memory() self.triton_client.unregister_system_shared_memory() def release_io(self, io_objs): for io_ in io_objs: if io_.name is not None: self.triton_client.unregister_cuda_shared_memory( name=io_.name ) destroy_shared_memory_region( io_.handle, shared_mem='cuda' ) def get_model_config(self, model_name): return self.triton_client.get_model_config(model_name).config def predict( self, model_name, input_data, output_sizes, model_version='1', shared_mem=None, attempts=1): model_version = str(model_version) try: inputs = self.create_inputs(input_data, shared_mem=shared_mem) outputs = self.create_outputs(output_sizes, shared_mem=shared_mem) response = self.triton_client.infer( model_name, model_version=model_version, inputs=[input_.input for input_ in inputs], outputs=[output_.output for output_ in outputs.values()] ) result = { name: get_response_data(response, handle, name) for name, (_, handle, _) in outputs.items() } self.release_io(inputs) self.release_io(outputs.values()) except triton_utils.InferenceServerException: if attempts > 1: return self.predict( model_name, input_data, output_sizes, model_version=model_version, shared_mem=shared_mem, attempts=attempts - 1 ) raise return result def predict_async( self, model_name, input_data, output_sizes, model_version='1', shared_mem=None, 
attempts=1): model_version = str(model_version) inputs = self.create_inputs(input_data, shared_mem=shared_mem) outputs = self.create_outputs(output_sizes, shared_mem=shared_mem) future_result = concurrent.futures.Future() def callback(result, error): if error is None: output_arrays = { name: get_response_data(result, handle, name) for name, (_, handle, _) in outputs.items() } future_result.set_result(output_arrays) self.release_io(outputs.values()) else: if isinstance(error, triton_utils.InferenceServerException): if attempts > 1: future_result.set_result(self.predict( model_name, input_data, output_sizes, model_version=model_version, shared_mem=shared_mem, attempts=attempts - 1 )) future_result.set_exception(error) self.triton_client.async_infer( model_name, model_version=model_version, inputs=[input_.input for input_ in inputs], outputs=[output_.output for output_ in outputs.values()], callback=callback ) if shared_mem is not None: def release_callback(fut): self.release_io(inputs) future_result.add_done_callback(release_callback) return future_result def predict_multimodel_async( self, model_names, input_data, output_sizes, model_versions=('1',), shared_mem=None, executor=None, attempts=1): all_models = [ (name, str(version)) for name in model_names for version in model_versions ] inputs = self.create_inputs(input_data, shared_mem=shared_mem) all_future_results = [] for model_name, version in all_models: outputs = self.create_outputs(output_sizes, shared_mem=shared_mem) def create_callback(future_result, outputs): def callback(result, error): if error is None: output_arrays = { name: get_response_data(result, handle, name) for name, (_, handle, _) in outputs.items() } future_result.set_result( MultiModelOutput( name=model_name, version=version, output=output_arrays ) ) self.release_io(outputs.values()) else: if isinstance(error, triton_utils.InferenceServerException): if attempts > 1: future_result.set_result(self.predict( model_name, input_data, output_sizes, model_version=version, shared_mem=shared_mem, attempts=attempts - 1 )) future_result.set_exception(error) return callback all_future_results.append(concurrent.futures.Future()) self.triton_client.async_infer( model_name, model_version=version, inputs=[input_.input for input_ in inputs], outputs=[output_.output for output_ in outputs.values()], callback=create_callback(all_future_results[-1], outputs) ) def wait_for_all(future_results, releasable_inputs): concurrent.futures.wait(future_results) self.release_io(releasable_inputs) return [fut.result() for fut in future_results] if executor is None: with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor: return executor.submit(wait_for_all, all_future_results, inputs) else: return executor.submit(wait_for_all, all_future_results, inputs)
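The asynchronous entry points above return `concurrent.futures.Future` objects rather than blocking, which lets a test or benchmark overlap many requests. The sketch below is a hypothetical illustration of `predict_async` and `predict_multimodel_async`: the model name and tensor names are placeholders, and it assumes a running server with such a model loaded.

```python
# Hypothetical sketch of the asynchronous Client APIs defined above.
# The model name ("example_model") and tensor names are placeholders.
import numpy as np

from rapids_triton import Client

client = Client(protocol='grpc')
client.wait_for_server(timeout=60)

batch = np.random.rand(16, 4).astype('float32')
inputs = {'input__0': batch}
output_sizes = {'output__0': batch.nbytes}

# Single-model asynchronous inference: returns a Future immediately.
future = client.predict_async('example_model', inputs, output_sizes)
outputs = future.result()  # dict of output name -> numpy array

# Fan the same inputs out to several models/versions at once. The returned
# future resolves to a list of MultiModelOutput(name, version, output).
multi_future = client.predict_multimodel_async(
    ['example_model'], inputs, output_sizes, model_versions=('1',)
)
for entry in multi_future.result():
    print(entry.name, entry.version, entry.output['output__0'].shape)
```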
rapidsai_public_repos/rapids-triton/python/rapids_triton/utils/safe_import.py
# Copyright (c) 2022, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


class ImportUnavailableError(Exception):
    '''Error thrown if a symbol is unavailable due to an issue importing it'''


class ImportReplacement:
    """A class to be used in place of an importable symbol if that symbol
    cannot be imported

    Parameters
    ----------
    symbol: str
        The name or import path to be used in error messages when attempting
        to make use of this symbol. E.g. "some_pkg.func" would result in an
        exception with message "some_pkg.func could not be imported"
    """
    def __init__(self, symbol):
        self._msg = f'{symbol} could not be imported'

    def __getattr__(self, name):
        raise ImportUnavailableError(self._msg)

    def __call__(self, *args, **kwargs):
        raise ImportUnavailableError(self._msg)
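The pattern this class supports, used elsewhere in the package (e.g. `rapids_triton.triton.io` and `rapids_triton.triton.response`), is to substitute an `ImportReplacement` for an optional dependency and fail only when it is actually used; a condensed sketch:

```python
# Sketch of the optional-import pattern supported by ImportReplacement,
# mirroring its use for the CUDA shared-memory utilities in this package.
from rapids_triton.utils.safe_import import (
    ImportReplacement, ImportUnavailableError
)

try:
    import tritonclient.utils.cuda_shared_memory as shm
except OSError:  # CUDA libraries not available on this machine
    shm = ImportReplacement('tritonclient.utils.cuda_shared_memory')


def release(handle):
    try:
        shm.destroy_shared_memory_region(handle)
    except ImportUnavailableError as err:
        # Only reached if CUDA shared memory was never importable.
        print(f"skipping release: {err}")
```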
rapidsai_public_repos/rapids-triton/python/rapids_triton/triton/io.py
# Copyright (c) 2021, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from collections import namedtuple from uuid import uuid4 import tritonclient.http as triton_http import tritonclient.grpc as triton_grpc from rapids_triton.utils.safe_import import ImportReplacement from rapids_triton.exceptions import IncompatibleSharedMemory from tritonclient import utils as triton_utils try: import tritonclient.utils.cuda_shared_memory as shm except OSError: # CUDA libraries not available shm = ImportReplacement('tritonclient.utils.cuda_shared_memory') TritonInput = namedtuple('TritonInput', ('name', 'handle', 'input')) TritonOutput = namedtuple('TritonOutput', ('name', 'handle', 'output')) def set_unshared_input_data(triton_input, data, protocol='grpc'): if protocol == 'grpc': triton_input.set_data_from_numpy(data) else: triton_input.set_data_from_numpy(data, binary_data=True) return TritonInput(None, None, triton_input) def set_shared_input_data(triton_client, triton_input, data, protocol='grpc'): input_size = data.size * data.itemsize input_name = 'input_{}'.format(uuid4().hex) input_handle = shm.create_shared_memory_region( input_name, input_size, 0 ) shm.set_shared_memory_region(input_handle, [data]) triton_client.register_cuda_shared_memory( input_name, shm.get_raw_handle(input_handle), 0, input_size ) triton_input.set_shared_memory(input_name, input_size) return TritonInput(input_name, input_handle, triton_input) def set_input_data( triton_client, triton_input, data, protocol='grpc', shared_mem=None): if shared_mem is None: return set_unshared_input_data( triton_input, data, protocol=protocol ) if shared_mem == 'cuda': return set_shared_input_data( triton_client, triton_input, data, protocol=protocol ) raise RuntimeError("Unsupported shared memory type") def create_triton_input( triton_client, data, name, dtype, protocol='grpc', shared_mem=None): if protocol == 'grpc': triton_input = triton_grpc.InferInput(name, data.shape, dtype) else: triton_input = triton_http.InferInput(name, data.shape, dtype) return set_input_data( triton_client, triton_input, data, protocol=protocol, shared_mem=shared_mem ) def create_output_handle(triton_client, triton_output, size, shared_mem=None): if shared_mem is None: return (None, None) output_name = 'output_{}'.format(uuid4().hex) output_handle = shm.create_shared_memory_region( output_name, size, 0 ) triton_client.register_cuda_shared_memory( output_name, shm.get_raw_handle(output_handle), 0, size ) triton_output.set_shared_memory(output_name, size) return output_name, output_handle def create_triton_output( triton_client, size, name, protocol='grpc', shared_mem=None): """Set up output memory in Triton Parameters ---------- triton_client : Triton client object The client used to set output parameters size : int The size of the output in bytes name : str The model-defined name for this output protocol : 'grpc' or 'http' The protocol used for communication with the server """ if protocol == 'grpc': triton_output = triton_grpc.InferRequestedOutput(name) else: 
triton_output = triton_grpc.InferRequestedOutput( name, binary_data=True ) output_name, output_handle = create_output_handle( triton_client, triton_output, size, shared_mem=shared_mem ) return TritonOutput( name=output_name, handle=output_handle, output=triton_output ) def destroy_shared_memory_region(handle, shared_mem='cuda'): """Release memory from a given shared memory handle Parameters ---------- handle : c_void_p The handle (as returned by the Triton client) for the region to be released. shared_mem : 'cuda' or 'system' or None The type of shared memory region to release. If None, an exception will be thrown. """ if shared_mem is None: raise IncompatibleSharedMemory( "Attempting to release non-shared memory" ) elif shared_mem == 'system': raise NotImplementedError( "System shared memory not yet supported" ) elif shared_mem == 'cuda': shm.destroy_shared_memory_region(handle) else: raise NotImplementedError( f"Unrecognized memory type {shared_mem}" )
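Tying the helpers above together, the sketch below shows, hypothetically, how a caller might build one CUDA shared-memory input/output pair and release it afterwards; it assumes a CUDA-capable `tritonclient` install and a running server, and the tensor names are placeholders.

```python
# Hypothetical sketch combining the io helpers above for one CUDA
# shared-memory input/output pair. Tensor names are placeholders; a
# CUDA-capable tritonclient install and a running server are assumed.
import numpy as np

from rapids_triton.triton.client import get_triton_client
from rapids_triton.triton.dtype import dtype_to_triton_name
from rapids_triton.triton.io import (
    create_triton_input, create_triton_output, destroy_shared_memory_region
)

triton_client = get_triton_client(protocol='grpc')

data = np.random.rand(8, 2).astype('float32')

triton_input = create_triton_input(
    triton_client, data, 'input__0', dtype_to_triton_name(data.dtype),
    protocol='grpc', shared_mem='cuda'
)
triton_output = create_triton_output(
    triton_client, data.nbytes, 'output__0',
    protocol='grpc', shared_mem='cuda'
)

# ... pass triton_input.input / triton_output.output to an infer call ...

# Unregister and free the shared-memory regions once the response is read.
for io_ in (triton_input, triton_output):
    triton_client.unregister_cuda_shared_memory(name=io_.name)
    destroy_shared_memory_region(io_.handle, shared_mem='cuda')
```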
rapidsai_public_repos/rapids-triton/python/rapids_triton/triton/response.py
# Copyright (c) 2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from tritonclient import utils as triton_utils

from rapids_triton.triton.message import TritonMessage
from rapids_triton.utils.safe_import import ImportReplacement

try:
    import tritonclient.utils.cuda_shared_memory as shm
except OSError:  # CUDA libraries not available
    shm = ImportReplacement('tritonclient.utils.cuda_shared_memory')


def get_response_data(response, output_handle, output_name):
    """Convert Triton response to NumPy array"""
    if output_handle is None:
        return response.as_numpy(output_name)
    else:
        network_result = TritonMessage(
            response.get_output(output_name)
        )
        return shm.get_contents_as_numpy(
            output_handle,
            triton_utils.triton_to_np_dtype(network_result.datatype),
            network_result.shape
        )
rapidsai_public_repos/rapids-triton/python/rapids_triton/triton/message.py
# Copyright (c) 2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


class TritonMessage:
    """Adapter to read output from both GRPC and HTTP responses"""
    def __init__(self, message):
        self.message = message

    def __getattr__(self, attr):
        try:
            return getattr(self.message, attr)
        except AttributeError:
            try:
                return self.message[attr]
            except Exception:  # Re-raise AttributeError
                pass
            raise
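A short sketch of the adapter's behavior with stand-in response objects (a gRPC-style output exposes attributes, while an HTTP JSON output behaves like a dict); both forms read identically through `TritonMessage`. The stand-in objects are illustrative, not real `tritonclient` types.

```python
# Sketch: TritonMessage reads "datatype"/"shape" the same way whether the
# wrapped output is attribute-style (gRPC) or dict-style (HTTP JSON).
from types import SimpleNamespace

from rapids_triton.triton.message import TritonMessage

grpc_style = SimpleNamespace(datatype='FP32', shape=[2, 3])
http_style = {'datatype': 'FP32', 'shape': [2, 3]}

for raw in (grpc_style, http_style):
    msg = TritonMessage(raw)
    print(msg.datatype, msg.shape)  # -> FP32 [2, 3] in both cases
```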
rapidsai_public_repos/rapids-triton/python/rapids_triton/triton/__init__.py
# Copyright (c) 2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
rapidsai_public_repos/rapids-triton/python/rapids_triton/triton/client.py
# Copyright (c) 2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import tritonclient.http as triton_http
import tritonclient.grpc as triton_grpc

STANDARD_PORTS = {
    'http': 8000,
    'grpc': 8001
}


def get_triton_client(
        protocol="grpc",
        host='localhost',
        port=None,
        concurrency=4):
    """Get Triton client instance of desired type"""
    if port is None:
        port = STANDARD_PORTS[protocol]

    if protocol == 'grpc':
        client = triton_grpc.InferenceServerClient(
            url=f'{host}:{port}',
            verbose=False
        )
    elif protocol == 'http':
        client = triton_http.InferenceServerClient(
            url=f'{host}:{port}',
            verbose=False,
            concurrency=concurrency
        )
    else:
        raise RuntimeError('Bad protocol: "{}"'.format(protocol))

    return client
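Typical use is just picking a protocol and letting the standard port be filled in; a brief sketch (host and port values are the Triton defaults, so adjust them to wherever your server actually listens):

```python
# Sketch: construct clients over both protocols using the helper above.
from rapids_triton.triton.client import get_triton_client, STANDARD_PORTS

grpc_client = get_triton_client(protocol='grpc')              # localhost:8001
http_client = get_triton_client(protocol='http', port=8080)   # custom port

print(STANDARD_PORTS)  # {'http': 8000, 'grpc': 8001}
```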
rapidsai_public_repos/rapids-triton/python/rapids_triton/triton/dtype.py
# Copyright (c) 2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import numpy as np

DTYPE_NAMES = {
    np.dtype('bool').str: 'BOOL',
    np.dtype('uint8').str: 'UINT8',
    np.dtype('uint16').str: 'UINT16',
    np.dtype('uint32').str: 'UINT32',
    np.dtype('uint64').str: 'UINT64',
    np.dtype('int8').str: 'INT8',
    np.dtype('int16').str: 'INT16',
    np.dtype('int32').str: 'INT32',
    np.dtype('int64').str: 'INT64',
    np.dtype('float16').str: 'FP16',
    np.dtype('float32').str: 'FP32',
    np.dtype('float64').str: 'FP64'
}


def dtype_to_triton_name(dtype):
    dtype = np.dtype(dtype).str
    return DTYPE_NAMES.get(dtype, 'BYTES')
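The mapping covers the numeric NumPy dtypes and falls back to Triton's `BYTES` type name for everything else; a quick sketch:

```python
# Sketch: numpy dtype -> Triton type-name mapping, with BYTES as fallback.
import numpy as np

from rapids_triton.triton.dtype import dtype_to_triton_name

print(dtype_to_triton_name(np.float32))          # FP32
print(dtype_to_triton_name(np.dtype('int64')))   # INT64
print(dtype_to_triton_name(np.dtype('object')))  # BYTES (fallback)
```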
rapidsai_public_repos/rapids-triton/conda/environments/rapids_triton_test.yml
---
name: rapids_triton_test
channels:
  - conda-forge
dependencies:
  - flake8
  - pip
  - python
  - pytest
  - numpy
  - pip:
      - tritonclient[all]
rapidsai_public_repos/rapids-triton/conda/environments/rapids_triton_dev.yml
---
name: rapids_triton_dev
channels:
  - conda-forge
dependencies:
  - ccache
  - cmake>=3.23.1,!=3.25.0
  - libstdcxx-ng<=11.2.0
  - libgcc-ng<=11.2.0
  - ninja
  - rapidjson
rapidsai_public_repos/rapids-triton/cpp/CMakeLists.txt
#============================================================================= # Copyright (c) 2021-2022, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #============================================================================= cmake_minimum_required(VERSION 3.21 FATAL_ERROR) file(DOWNLOAD https://raw.githubusercontent.com/rapidsai/rapids-cmake/branch-22.02/RAPIDS.cmake ${CMAKE_BINARY_DIR}/RAPIDS.cmake) include(${CMAKE_BINARY_DIR}/RAPIDS.cmake) include(rapids-cmake) include(rapids-cpm) include(rapids-cuda) include(rapids-export) include(rapids-find) ############################################################################## # - User Options ------------------------------------------------------------ option(TRITON_ENABLE_GPU "Enable GPU support in Triton" ON) option(BUILD_TESTS "Build rapids_triton unit-tests" ON) option(BUILD_EXAMPLE "Build rapids_identity example backend" OFF) option(CUDA_ENABLE_KERNELINFO "Enable kernel resource usage info" OFF) option(CUDA_ENABLE_LINEINFO "Enable the -lineinfo option for nvcc (useful for cuda-memcheck / profiler)" OFF) option(CUDA_STATIC_RUNTIME "Statically link the CUDA runtime" OFF) option(DETECT_CONDA_ENV "Enable detection of conda environment for dependencies" ON) option(DISABLE_DEPRECATION_WARNINGS "Disable depreaction warnings " ON) option(NVTX "Enable nvtx markers" OFF) option(TRITON_ENABLE_STATS "Enable statistics collection in Triton" ON) set(TRITON_COMMON_REPO_TAG "r21.12" CACHE STRING "Tag for triton-inference-server/common repo") set(TRITON_CORE_REPO_TAG "r21.12" CACHE STRING "Tag for triton-inference-server/core repo") set(TRITON_BACKEND_REPO_TAG "r21.12" CACHE STRING "Tag for triton-inference-server/backend repo") message(VERBOSE "RAPIDS_TRITON: Build RAPIDS_TRITON unit-tests: ${BUILD_TESTS}") message(VERBOSE "RAPIDS_TRITON: Enable detection of conda environment for dependencies: ${DETECT_CONDA_ENV}") message(VERBOSE "RAPIDS_TRITON: Disable depreaction warnings " ${DISABLE_DEPRECATION_WARNINGS}) message(VERBOSE "RAPIDS_TRITON: Enable kernel resource usage info: ${CUDA_ENABLE_KERNELINFO}") message(VERBOSE "RAPIDS_TRITON: Enable lineinfo in nvcc: ${CUDA_ENABLE_LINEINFO}") message(VERBOSE "RAPIDS_TRITON: Enable nvtx markers: ${NVTX}") message(VERBOSE "RAPIDS_TRITON: Statically link the CUDA runtime: ${CUDA_STATIC_RUNTIME}") message(VERBOSE "RAPIDS_TRITON: Enable GPU support: ${TRITON_ENABLE_GPU}") message(VERBOSE "RAPIDS_TRITON: Enable statistics collection in Triton: ${TRITON_ENABLE_STATS}") message(VERBOSE "RAPIDS_TRITON: Triton common repo tag: ${TRITON_COMMON_REPO_TAG}") message(VERBOSE "RAPIDS_TRITON: Triton core repo tag: ${TRITON_CORE_REPO_TAG}") message(VERBOSE "RAPIDS_TRITON: Triton backend repo tag: ${TRITON_BACKEND_REPO_TAG}") ############################################################################## # - Project Initialization --------------------------------------------------- if(TRITON_ENABLE_GPU) rapids_cuda_init_architectures(RAPIDS_TRITON) project(RAPIDS_TRITON VERSION 22.02.00 LANGUAGES CXX CUDA) else() 
project(RAPIDS_TRITON VERSION 22.02.00 LANGUAGES CXX) endif() ############################################################################## # - build type --------------------------------------------------------------- # Set a default build type if none was specified rapids_cmake_build_type(Release) # this is needed for clang-tidy runs set(CMAKE_EXPORT_COMPILE_COMMANDS ON) # Set RMM logging level set(RMM_LOGGING_LEVEL "INFO" CACHE STRING "Choose the logging level.") set_property(CACHE RMM_LOGGING_LEVEL PROPERTY STRINGS "TRACE" "DEBUG" "INFO" "WARN" "ERROR" "CRITICAL" "OFF") message(VERBOSE "RAPIDS_TRITON: RMM_LOGGING_LEVEL = '${RMM_LOGGING_LEVEL}'.") ############################################################################## # - Conda environment detection ---------------------------------------------- if(DETECT_CONDA_ENV) rapids_cmake_support_conda_env( conda_env MODIFY_PREFIX_PATH ) if (CMAKE_INSTALL_PREFIX_INITIALIZED_TO_DEFAULT AND DEFINED ENV{CONDA_PREFIX}) message(STATUS "RAPIDS_TRITON: No CMAKE_INSTALL_PREFIX argument detected, setting to: $ENV{CONDA_PREFIX}") set(CMAKE_INSTALL_PREFIX "$ENV{CONDA_PREFIX}") endif() endif() ############################################################################## # - compiler options --------------------------------------------------------- set(CMAKE_C_COMPILER_LAUNCHER ccache) set(CMAKE_CXX_COMPILER_LAUNCHER ccache) if(TRITON_ENABLE_GPU) set(CMAKE_CUDA_COMPILER_LAUNCHER ccache) # * find CUDAToolkit package # * determine GPU architectures # * enable the CMake CUDA language # * set other CUDA compilation flags rapids_find_package(CUDAToolkit REQUIRED BUILD_EXPORT_SET rapids_triton-exports INSTALL_EXPORT_SET rapids_triton-exports ) include(cmake/modules/ConfigureCUDA.cmake) endif() ############################################################################## # - Requirements ------------------------------------------------------------- # add third party dependencies using CPM rapids_cpm_init() if(TRITON_ENABLE_GPU) include(cmake/thirdparty/get_rmm.cmake) include(cmake/thirdparty/get_raft.cmake) endif() include(cmake/thirdparty/get_rapidjson.cmake) include(cmake/thirdparty/get_triton.cmake) if(BUILD_TESTS) include(cmake/thirdparty/get_gtest.cmake) endif() ############################################################################## # - install targets----------------------------------------------------------- add_library(rapids_triton INTERFACE) add_library(rapids_triton::rapids_triton ALIAS rapids_triton) target_include_directories(rapids_triton INTERFACE "$<BUILD_INTERFACE:${RAPIDS_TRITON_SOURCE_DIR}/include>" "$<INSTALL_INTERFACE:include>") target_link_libraries(rapids_triton INTERFACE $<$<BOOL:${TRITON_ENABLE_GPU}>:rmm::rmm> $<$<BOOL:${TRITON_ENABLE_GPU}>:raft::raft> triton-core-serverstub triton-backend-utils ) if (TRITON_ENABLE_GPU) target_compile_features( rapids_triton INTERFACE cxx_std_17 $<BUILD_INTERFACE:cuda_std_17> ) else() target_compile_features( rapids_triton INTERFACE cxx_std_17 ) endif() rapids_cmake_install_lib_dir(lib_dir) install(TARGETS rapids_triton DESTINATION ${lib_dir} EXPORT rapids_triton-exports ) include(GNUInstallDirs) install(DIRECTORY include/rapids_triton/ DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/rapids_triton ) # Temporary install of rapids_triton.hpp while the file is removed install(FILES include/rapids_triton.hpp DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/rapids_triton ) ############################################################################## # - install export 
----------------------------------------------------------- set(doc_string [=[ Provide targets for RAPIDS_TRITON. RAPIDS_TRITON is a header-only library designed to make it easier and faster to integrate RAPIDS algorithms as Triton backends. ]=]) rapids_export(INSTALL rapids_triton EXPORT_SET rapids_triton-exports GLOBAL_TARGETS rapids_triton # since we can't hook into EXPORT SETS NAMESPACE rapids_triton:: DOCUMENTATION doc_string ) ############################################################################## # - build export ------------------------------------------------------------- rapids_export(BUILD rapids_triton EXPORT_SET rapids_triton-exports GLOBAL_TARGETS rapids_triton # since we can't hook into EXPORT SETS LANGUAGES CUDA DOCUMENTATION doc_string NAMESPACE rapids_triton:: ) ############################################################################## # - build test executable ---------------------------------------------------- if(BUILD_TESTS) include(test/CMakeLists.txt) endif() ############################################################################## # - build example backend ---------------------------------------------------- if(BUILD_EXAMPLE) include(src/CMakeLists.txt) endif() ############################################################################## # - doxygen targets ---------------------------------------------------------- # TODO(wphicks) # include(cmake/doxygen.cmake) # add_doxygen_target(IN_DOXYFILE Doxyfile.in # OUT_DOXYFILE ${CMAKE_CURRENT_BINARY_DIR}/Doxyfile # CWD ${CMAKE_CURRENT_BINARY_DIR})
0
rapidsai_public_repos/rapids-triton
rapidsai_public_repos/rapids-triton/cpp/.clang-format
--- # Refer to the following link for the explanation of each params: # http://releases.llvm.org/8.0.0/tools/clang/docs/ClangFormatStyleOptions.html Language: Cpp # BasedOnStyle: Google AccessModifierOffset: -1 AlignAfterOpenBracket: Align AlignConsecutiveAssignments: true AlignConsecutiveBitFields: true AlignConsecutiveDeclarations: false AlignConsecutiveMacros: true AlignEscapedNewlines: Left AlignOperands: true AlignTrailingComments: true AllowAllArgumentsOnNextLine: true AllowAllConstructorInitializersOnNextLine: true AllowAllParametersOfDeclarationOnNextLine: true AllowShortBlocksOnASingleLine: true AllowShortCaseLabelsOnASingleLine: true AllowShortEnumsOnASingleLine: true AllowShortFunctionsOnASingleLine: All AllowShortIfStatementsOnASingleLine: true AllowShortLambdasOnASingleLine: true AllowShortLoopsOnASingleLine: false # This is deprecated AlwaysBreakAfterDefinitionReturnType: None AlwaysBreakAfterReturnType: None AlwaysBreakBeforeMultilineStrings: true AlwaysBreakTemplateDeclarations: Yes BinPackArguments: false BinPackParameters: false BraceWrapping: AfterClass: false AfterControlStatement: false AfterEnum: false AfterFunction: false AfterNamespace: false AfterObjCDeclaration: false AfterStruct: false AfterUnion: false AfterExternBlock: false BeforeCatch: false BeforeElse: false IndentBraces: false # disabling the below splits, else, they'll just add to the vertical length of source files! SplitEmptyFunction: false SplitEmptyRecord: false SplitEmptyNamespace: false BreakAfterJavaFieldAnnotations: false BreakBeforeBinaryOperators: None BreakBeforeBraces: WebKit BreakBeforeInheritanceComma: false BreakBeforeTernaryOperators: true BreakConstructorInitializersBeforeComma: false BreakConstructorInitializers: BeforeColon BreakInheritanceList: BeforeColon BreakStringLiterals: true ColumnLimit: 100 CommentPragmas: '^ IWYU pragma:' CompactNamespaces: false ConstructorInitializerAllOnOneLineOrOnePerLine: true # Kept the below 2 to be the same as `IndentWidth` to keep everything uniform ConstructorInitializerIndentWidth: 2 ContinuationIndentWidth: 2 Cpp11BracedListStyle: true DerivePointerAlignment: false DisableFormat: false ExperimentalAutoDetectBinPacking: false FixNamespaceComments: true ForEachMacros: - foreach - Q_FOREACH - BOOST_FOREACH IncludeBlocks: Preserve IncludeCategories: - Regex: '^<ext/.*\.h>' Priority: 2 - Regex: '^<.*\.h>' Priority: 1 - Regex: '^<.*' Priority: 2 - Regex: '.*' Priority: 3 IncludeIsMainRegex: '([-_](test|unittest))?$' IndentCaseLabels: true IndentPPDirectives: None IndentWidth: 2 IndentWrappedFunctionNames: false JavaScriptQuotes: Leave JavaScriptWrapImports: true KeepEmptyLinesAtTheStartOfBlocks: false MacroBlockBegin: '' MacroBlockEnd: '' MaxEmptyLinesToKeep: 1 NamespaceIndentation: None ObjCBinPackProtocolList: Never ObjCBlockIndentWidth: 2 ObjCSpaceAfterProperty: false ObjCSpaceBeforeProtocolList: true PenaltyBreakAssignment: 2 PenaltyBreakBeforeFirstCallParameter: 1 PenaltyBreakComment: 300 PenaltyBreakFirstLessLess: 120 PenaltyBreakString: 1000 PenaltyBreakTemplateDeclaration: 10 PenaltyExcessCharacter: 1000000 PenaltyReturnTypeOnItsOwnLine: 200 PointerAlignment: Left RawStringFormats: - Language: Cpp Delimiters: - cc - CC - cpp - Cpp - CPP - 'c++' - 'C++' CanonicalDelimiter: '' - Language: TextProto Delimiters: - pb - PB - proto - PROTO EnclosingFunctions: - EqualsProto - EquivToProto - PARSE_PARTIAL_TEXT_PROTO - PARSE_TEST_PROTO - PARSE_TEXT_PROTO - ParseTextOrDie - ParseTextProtoOrDie CanonicalDelimiter: '' BasedOnStyle: google # Enabling comment 
reflow causes doxygen comments to be messed up in their formats! ReflowComments: true SortIncludes: true SortUsingDeclarations: true SpaceAfterCStyleCast: false SpaceAfterTemplateKeyword: true SpaceBeforeAssignmentOperators: true SpaceBeforeCpp11BracedList: false SpaceBeforeCtorInitializerColon: true SpaceBeforeInheritanceColon: true SpaceBeforeParens: ControlStatements SpaceBeforeRangeBasedForLoopColon: true SpaceBeforeSquareBrackets: false SpaceInEmptyBlock: false SpaceInEmptyParentheses: false SpacesBeforeTrailingComments: 2 SpacesInAngles: false SpacesInConditionalStatement: false SpacesInContainerLiterals: true SpacesInCStyleCastParentheses: false SpacesInParentheses: false SpacesInSquareBrackets: false Standard: c++17 StatementMacros: - Q_UNUSED - QT_REQUIRE_VERSION # Be consistent with indent-width, even for people who use tab for indentation! TabWidth: 2 UseTab: Never
0
rapidsai_public_repos/rapids-triton/cpp
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton.hpp
/*
 * Copyright (c) 2021, NVIDIA CORPORATION.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#pragma once
#include <string>

namespace triton {
namespace backend {
namespace rapids {

/* Function for testing rapids_triton include
 *
 * @return message indicating rapids_triton has been included successfully */
inline auto test_install() { return std::string("rapids_triton set up successfully"); }

}  // namespace rapids
}  // namespace backend
}  // namespace triton
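A minimal usage sketch (not part of the repository): a downstream project could call `test_install` to confirm that the rapids_triton headers are on its include path. The `main` function below is purely illustrative.

```
#include <rapids_triton.hpp>

#include <iostream>

int main()
{
  // Prints "rapids_triton set up successfully" if the header is found and compiles.
  std::cout << triton::backend::rapids::test_install() << std::endl;
  return 0;
}
```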
0
rapidsai_public_repos/rapids-triton/cpp/include
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/build_control.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <cstddef> namespace triton { namespace backend { namespace rapids { #ifdef TRITON_ENABLE_GPU auto constexpr IS_GPU_BUILD = true; #else auto constexpr IS_GPU_BUILD = false; #endif } // namespace rapids } // namespace backend } // namespace triton
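A short sketch of how `IS_GPU_BUILD` is typically consumed: branching at compile time instead of scattering the raw preprocessor symbol through backend code. The helper name `build_label` is hypothetical.

```
#include <rapids_triton/build_control.hpp>

#include <string>

// Hypothetical helper: describe the build, chosen at compile time via IS_GPU_BUILD.
inline auto build_label()
{
  if constexpr (triton::backend::rapids::IS_GPU_BUILD) {
    return std::string{"GPU-enabled build"};
  } else {
    return std::string{"CPU-only build"};
  }
}
```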
0
rapidsai_public_repos/rapids-triton/cpp/include
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/exceptions.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #ifdef TRITON_ENABLE_GPU #include <cuda_runtime_api.h> #else #include <rapids_triton/cpu_only/cuda_runtime_replacement.hpp> #endif #include <triton/core/tritonserver.h> #include <exception> #include <rapids_triton/build_control.hpp> #include <string> namespace triton { namespace backend { namespace rapids { using ErrorCode = TRITONSERVER_Error_Code; namespace Error { auto constexpr Unknown = ErrorCode::TRITONSERVER_ERROR_UNKNOWN; auto constexpr Internal = ErrorCode::TRITONSERVER_ERROR_INTERNAL; auto constexpr NotFound = ErrorCode::TRITONSERVER_ERROR_NOT_FOUND; auto constexpr InvalidArg = ErrorCode::TRITONSERVER_ERROR_INVALID_ARG; auto constexpr Unavailable = ErrorCode::TRITONSERVER_ERROR_UNAVAILABLE; auto constexpr Unsupported = ErrorCode::TRITONSERVER_ERROR_UNSUPPORTED; auto constexpr AlreadyExists = ErrorCode::TRITONSERVER_ERROR_ALREADY_EXISTS; } // namespace Error /** * @brief Exception thrown if processing cannot continue for a request * * This exception should be thrown whenever a condition is encountered that (if * it is not appropriately handled by some other exception handler) SHOULD * result in Triton reporting an error for the request being processed. It * signals that (absent any other fallbacks), this request cannot be fulfilled * but that the server may still be in a state to continue handling other * requests, including requests to other models. */ struct TritonException : std::exception { public: TritonException() : error_(TRITONSERVER_ErrorNew(Error::Unknown, "encountered unknown error")) {} TritonException(ErrorCode code, std::string const& msg) : error_(TRITONSERVER_ErrorNew(code, msg.c_str())) { } TritonException(ErrorCode code, char const* msg) : error_{TRITONSERVER_ErrorNew(code, msg)} {} TritonException(TRITONSERVER_Error* prev_error) : error_(prev_error) {} virtual char const* what() const noexcept { return TRITONSERVER_ErrorMessage(error_); } auto* error() const { return error_; } private: TRITONSERVER_Error* error_; }; inline void triton_check(TRITONSERVER_Error* err) { if (err != nullptr) { throw TritonException(err); } } inline void cuda_check(cudaError_t const& err) { if constexpr (IS_GPU_BUILD) { if (err != cudaSuccess) { cudaGetLastError(); throw TritonException(Error::Internal, cudaGetErrorString(err)); } } else { throw TritonException(Error::Internal, "cuda_check used in non-GPU build"); } } } // namespace rapids } // namespace backend } // namespace triton
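A hedged sketch of the intended exception pattern: backend code throws `TritonException` with one of the `Error` codes (or wraps Triton C-API calls in `triton_check`), and the API-level handlers later convert it back into a `TRITONSERVER_Error*`. The function `check_positive` is an invented illustration.

```
#include <rapids_triton/exceptions.hpp>

namespace rapids = triton::backend::rapids;

// Hypothetical validation helper: signal a bad argument through TritonException so
// the enclosing entry point can report it to Triton as TRITONSERVER_ERROR_INVALID_ARG.
inline void check_positive(int value)
{
  if (value <= 0) {
    throw rapids::TritonException(rapids::Error::InvalidArg, "value must be positive");
  }
}
```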
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/utils/const_agnostic.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <type_traits> namespace triton { namespace backend { namespace rapids { template <typename T, typename U> using const_agnostic_same_t = std::enable_if_t<std::is_same_v<std::remove_const_t<T>, std::remove_const_t<U>>>; } } // namespace backend } // namespace triton
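A small sketch showing what `const_agnostic_same_t` is for: constraining a template so it accepts element types that differ only in const-qualification (e.g. copying from `float const*` into `float*`). The `copy_values` helper is hypothetical.

```
#include <rapids_triton/utils/const_agnostic.hpp>

#include <cstddef>

namespace rapids = triton::backend::rapids;

// Participates in overload resolution only when T and U are the same type up to
// const-qualification, so copy_values(dst, src, n) accepts a const source pointer.
template <typename T, typename U, typename = rapids::const_agnostic_same_t<T, U>>
void copy_values(T* dst, U* src, std::size_t count)
{
  for (auto i = std::size_t{}; i < count; ++i) {
    dst[i] = src[i];
  }
}
```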
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/utils/narrow.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rapids_triton/exceptions.hpp> #include <type_traits> namespace triton { namespace backend { namespace rapids { template <typename T, typename F> auto narrow(F from) { auto to = static_cast<T>(from); if (static_cast<F>(to) != from || (std::is_signed<F>::value && !std::is_signed<T>::value && from < F{}) || (std::is_signed<T>::value && !std::is_signed<F>::value && to < T{}) || (std::is_signed<T>::value == std::is_signed<F>::value && ((to < T{}) != (from < F{})))) { throw TritonException(Error::Internal, "invalid narrowing"); } return to; } } // namespace rapids } // namespace backend } // namespace triton
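A brief usage sketch: `narrow` performs a checked conversion and throws a `TritonException` when the value cannot be represented in the target type. The wrapper `checked_size` is illustrative only.

```
#include <rapids_triton/utils/narrow.hpp>

#include <cstdint>

namespace rapids = triton::backend::rapids;

// A value that fits in the target type converts cleanly; a negative or out-of-range
// value raises a TritonException instead of silently wrapping around.
inline auto checked_size(std::int64_t reported)
{
  return rapids::narrow<std::uint32_t>(reported);
}
```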
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/utils/device_setter.hpp
/*
 * Copyright (c) 2021, NVIDIA CORPORATION.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
#pragma once

#ifdef TRITON_ENABLE_GPU
#include <cuda_runtime_api.h>
#endif
#include <rapids_triton/build_control.hpp>
#include <rapids_triton/exceptions.hpp>
#include <rapids_triton/triton/device.hpp>  // for device_id_t

namespace triton {
namespace backend {
namespace rapids {
/** Struct for setting cuda device within a code block */
struct device_setter {
  device_setter(device_id_t device) : prev_device_{}
  {
    if constexpr (IS_GPU_BUILD) {
      cuda_check(cudaGetDevice(&prev_device_));
      cuda_check(cudaSetDevice(device));
    } else {
      throw TritonException(Error::Internal, "Device setter used in non-GPU build");
    }
  }

  ~device_setter()
  {
    if constexpr (IS_GPU_BUILD) { cudaSetDevice(prev_device_); }
  }

 private:
  device_id_t prev_device_;
};
}  // namespace rapids
}  // namespace backend
}  // namespace triton
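A hedged usage sketch: `device_setter` is an RAII guard, so constructing it switches the current CUDA device for the enclosing scope and the destructor restores the previous device. `do_work_on_device` is a hypothetical caller and assumes a GPU-enabled build at runtime.

```
#include <rapids_triton/build_control.hpp>
#include <rapids_triton/triton/device.hpp>
#include <rapids_triton/utils/device_setter.hpp>

namespace rapids = triton::backend::rapids;

// Hypothetical helper: perform device work on the requested GPU; the previous
// device is restored automatically when `scope` is destroyed.
inline void do_work_on_device(rapids::device_id_t device)
{
  if constexpr (rapids::IS_GPU_BUILD) {
    auto scope = rapids::device_setter{device};
    // ... launch kernels or allocate memory on `device` here ...
  }
}
```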
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton/model.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <triton/backend/backend_common.h> #include <triton/core/tritonbackend.h> #include <cstddef> #include <cstdint> #include <memory> #include <rapids_triton/exceptions.hpp> #include <string> namespace triton { namespace backend { namespace rapids { inline auto get_model_version(TRITONBACKEND_Model& model) { auto version = std::uint64_t{}; triton_check(TRITONBACKEND_ModelVersion(&model, &version)); return version; } inline auto get_model_name(TRITONBACKEND_Model& model) { auto* cname = static_cast<char const*>(nullptr); triton_check(TRITONBACKEND_ModelName(&model, &cname)); return std::string(cname); } inline auto get_model_config(TRITONBACKEND_Model& model) { auto* config_message = static_cast<TRITONSERVER_Message*>(nullptr); triton_check(TRITONBACKEND_ModelConfig(&model, 1, &config_message)); auto* buffer = static_cast<char const*>(nullptr); auto byte_size = std::size_t{}; triton_check(TRITONSERVER_MessageSerializeToJson(config_message, &buffer, &byte_size)); auto model_config = std::make_unique<common::TritonJson::Value>(); auto* err = model_config->Parse(buffer, byte_size); auto* result = TRITONSERVER_MessageDelete(config_message); if (err != nullptr) { throw(TritonException(err)); } if (result != nullptr) { throw(TritonException(result)); } return model_config; } /** * @brief Set model state (as used by Triton) to given object * * This function accepts a unique_ptr to an object derived from a Triton * BackendModel object and sets it as the stored state for a model in the * Triton server. Note that this object is not the same as a RAPIDS-Triton * "SharedModelState" object. The object that Triton expects must wrap this * SharedModelState and provide additional interface compatibility. */ template <typename ModelStateType> void set_model_state(TRITONBACKEND_Model& model, std::unique_ptr<ModelStateType>&& model_state) { triton_check(TRITONBACKEND_ModelSetState(&model, reinterpret_cast<void*>(model_state.release()))); } /** Given a model, return its associated ModelState object */ template <typename ModelStateType> auto* get_model_state(TRITONBACKEND_Model& model) { auto* vstate = static_cast<void*>(nullptr); triton_check(TRITONBACKEND_ModelState(&model, &vstate)); auto* model_state = reinterpret_cast<ModelStateType*>(vstate); return model_state; } } // namespace rapids } // namespace backend } // namespace triton
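A short sketch (the `describe_model` helper is hypothetical) combining the model accessors above with the logging utilities to report a model's identity during initialization.

```
#include <triton/core/tritonbackend.h>

#include <rapids_triton/triton/logging.hpp>
#include <rapids_triton/triton/model.hpp>

namespace rapids = triton::backend::rapids;

// Hypothetical sketch: log which model and version Triton has handed to the backend.
inline void describe_model(TRITONBACKEND_Model& model)
{
  auto name    = rapids::get_model_name(model);
  auto version = rapids::get_model_version(model);
  rapids::log_info(__FILE__, __LINE__) << "Loaded model " << name << " (version " << version << ")";
}
```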
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton/model_instance.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <triton/backend/backend_model_instance.h> #include <memory> #include <rapids_triton/exceptions.hpp> #include <rapids_triton/triton/deployment.hpp> #include <rapids_triton/triton/device.hpp> #include <string> namespace triton { namespace backend { namespace rapids { /** Get the name of a Triton model instance from the instance itself */ inline auto get_model_instance_name(TRITONBACKEND_ModelInstance& instance) { auto* cname = static_cast<char const*>(nullptr); triton_check(TRITONBACKEND_ModelInstanceName(&instance, &cname)); return std::string(cname); } /** Get the device on which a Triton model instance is loaded * * If this instance is loaded on the host, 0 will be returned. Otherwise the * GPU device id will be returned.*/ inline auto get_device_id(TRITONBACKEND_ModelInstance& instance) { auto device_id = device_id_t{}; triton_check(TRITONBACKEND_ModelInstanceDeviceId(&instance, &device_id)); return device_id; } /** Determine how a Triton model instance is deployed * * Returns enum value indicating whether the instance is deployed on device * or on the host */ inline auto get_deployment_type(TRITONBACKEND_ModelInstance& instance) { auto kind = GPUDeployment; triton_check(TRITONBACKEND_ModelInstanceKind(&instance, &kind)); return kind; } /** Return the Triton model from one of its instances */ inline auto* get_model_from_instance(TRITONBACKEND_ModelInstance& instance) { auto* model = static_cast<TRITONBACKEND_Model*>(nullptr); triton_check(TRITONBACKEND_ModelInstanceModel(&instance, &model)); return model; } /** * @brief Set Triton model instance state to given object * * This function accepts a unique_ptr to an object derived from a Triton * BackendModelInstance object and sets it as the stored state for a model in the * Triton server. Note that this object is not the same as a RAPIDS-Triton * "Model" object. The object that Triton expects must wrap this Model and * provide additional interface compatibility. */ template <typename ModelInstanceStateType> void set_instance_state(TRITONBACKEND_ModelInstance& instance, std::unique_ptr<ModelInstanceStateType>&& model_instance_state) { triton_check(TRITONBACKEND_ModelInstanceSetState( &instance, reinterpret_cast<void*>(model_instance_state.release()))); } /** Get model instance state from instance */ template <typename ModelInstanceStateType> auto* get_instance_state(TRITONBACKEND_ModelInstance& instance) { auto* instance_state = static_cast<ModelInstanceStateType*>(nullptr); triton_check( TRITONBACKEND_ModelInstanceState(&instance, reinterpret_cast<void**>(&instance_state))); return instance_state; } } // namespace rapids } // namespace backend } // namespace triton
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton/logging.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <triton/core/tritonserver.h> #include <rapids_triton/exceptions.hpp> #include <ostream> #include <sstream> #include <string> namespace triton { namespace backend { namespace rapids { namespace { /** Log message at indicated level */ inline void log(TRITONSERVER_LogLevel level, char const* filename, int line, char const* message) { triton_check(TRITONSERVER_LogMessage(level, filename, line, message)); } } // namespace struct log_stream : public std::ostream { log_stream(TRITONSERVER_LogLevel level, char const* filename, int line) : std::ostream{}, buffer_{level, filename, line} { rdbuf(&buffer_); } log_stream(TRITONSERVER_LogLevel level) : std::ostream{}, buffer_{level, __FILE__, __LINE__} { rdbuf(&buffer_); } ~log_stream() { try { flush(); } catch (std::ios_base::failure const& ignored_err) { // Ignore error if flush fails } } private: struct log_buffer : public std::stringbuf { log_buffer(TRITONSERVER_LogLevel level, char const* filename, int line) : level_{level}, filename_{filename}, line_{line} { } virtual int sync() { auto msg = str(); if (!msg.empty()) { log(level_, filename_, line_, msg.c_str()); str(""); } return 0; } private: TRITONSERVER_LogLevel level_; char const* filename_; int line_; }; log_buffer buffer_; }; /** Log message at INFO level */ inline void log_info(char const* filename, int line, char const* message) { log(TRITONSERVER_LOG_INFO, filename, line, message); } inline void log_info(char const* filename, int line, std::string const& message) { log_info(filename, line, message.c_str()); } inline void log_info(char const* message) { log_info(__FILE__, __LINE__, message); } inline void log_info(std::string const& message) { log_info(__FILE__, __LINE__, message.c_str()); } inline auto log_info(char const* filename, int line) { return log_stream(TRITONSERVER_LOG_INFO, filename, line); } inline auto log_info() { return log_stream(TRITONSERVER_LOG_INFO); } /** Log message at WARN level */ inline void log_warn(char const* filename, int line, char const* message) { log(TRITONSERVER_LOG_WARN, filename, line, message); } inline void log_warn(char const* filename, int line, std::string const& message) { log_warn(filename, line, message.c_str()); } inline void log_warn(char const* message) { log_warn(__FILE__, __LINE__, message); } inline void log_warn(std::string const& message) { log_warn(__FILE__, __LINE__, message.c_str()); } inline auto log_warn(char const* filename, int line) { return log_stream(TRITONSERVER_LOG_WARN, filename, line); } inline auto log_warn() { return log_stream(TRITONSERVER_LOG_WARN); } /** Log message at ERROR level */ inline void log_error(char const* filename, int line, char const* message) { log(TRITONSERVER_LOG_ERROR, filename, line, message); } inline void log_error(char const* filename, int line, std::string const& message) { log_error(filename, line, message.c_str()); } inline void log_error(char const* message) { log_error(__FILE__, __LINE__, 
message); } inline void log_error(std::string const& message) { log_error(__FILE__, __LINE__, message.c_str()); } inline auto log_error(char const* filename, int line) { return log_stream(TRITONSERVER_LOG_ERROR, filename, line); } inline auto log_error() { return log_stream(TRITONSERVER_LOG_ERROR); } /** Log message at VERBOSE level */ inline void log_debug(char const* filename, int line, char const* message) { log(TRITONSERVER_LOG_VERBOSE, filename, line, message); } inline void log_debug(char const* filename, int line, std::string const& message) { log_debug(filename, line, message.c_str()); } inline void log_debug(char const* message) { log_debug(__FILE__, __LINE__, message); } inline void log_debug(std::string const& message) { log_debug(__FILE__, __LINE__, message.c_str()); } inline auto log_debug(char const* filename, int line) { return log_stream(TRITONSERVER_LOG_VERBOSE, filename, line); } inline auto log_debug() { return log_stream(TRITONSERVER_LOG_VERBOSE); } } // namespace rapids } // namespace backend } // namespace triton
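A usage sketch for the logging helpers: the message overloads emit a fixed string, while the stream overloads buffer output and emit one log record when the temporary stream is destroyed at the end of the statement. `log_examples` and its parameters are invented for illustration.

```
#include <rapids_triton/triton/logging.hpp>

#include <string>

namespace rapids = triton::backend::rapids;

// Hypothetical sketch: fixed-message logging plus stream-style logging at
// different levels; each full statement produces a single log record.
inline void log_examples(std::string const& model_name, int batch_size)
{
  rapids::log_info("backend ready");
  rapids::log_warn(__FILE__, __LINE__) << model_name << ": unusually large batch of " << batch_size;
  rapids::log_debug() << "verbose details for " << model_name;
}
```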
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton/backend.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <triton/core/tritonbackend.h> #include <cstdint> #include <rapids_triton/exceptions.hpp> #include <rapids_triton/triton/logging.hpp> #include <string> namespace triton { namespace backend { namespace rapids { inline auto get_backend_name(TRITONBACKEND_Backend& backend) { const char* cname; triton_check(TRITONBACKEND_BackendName(&backend, &cname)); return std::string(cname); } namespace { struct backend_version { std::uint32_t major; std::uint32_t minor; }; } // namespace inline auto check_backend_version(TRITONBACKEND_Backend& backend) { auto version = backend_version{}; triton_check(TRITONBACKEND_ApiVersion(&version.major, &version.minor)); log_info(__FILE__, __LINE__) << "Triton TRITONBACKEND API version: " << version.major << "." << version.minor; auto name = get_backend_name(backend); log_info(__FILE__, __LINE__) << "'" << name << "' TRITONBACKEND API version: " << TRITONBACKEND_API_VERSION_MAJOR << "." << TRITONBACKEND_API_VERSION_MINOR; return ((version.major == TRITONBACKEND_API_VERSION_MAJOR) && (version.minor >= TRITONBACKEND_API_VERSION_MINOR)); } } // namespace rapids } // namespace backend } // namespace triton
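A hedged sketch of how `check_backend_version` is typically used during backend initialization; `ensure_compatible` is a hypothetical wrapper that converts an incompatible version into a `TritonException`.

```
#include <triton/core/tritonbackend.h>

#include <rapids_triton/exceptions.hpp>
#include <rapids_triton/triton/backend.hpp>

namespace rapids = triton::backend::rapids;

// Hypothetical guard: refuse to proceed when the server's backend API is older
// than the version this backend was compiled against.
inline void ensure_compatible(TRITONBACKEND_Backend& backend)
{
  if (!rapids::check_backend_version(backend)) {
    throw rapids::TritonException(rapids::Error::Unsupported,
                                  "Triton backend API version does not support this backend");
  }
}
```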
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton/triton_memory_resource.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <triton/core/tritonbackend.h> #include <triton/core/tritonserver.h> #include <cstddef> #include <cstdint> #include <rapids_triton/exceptions.hpp> #include <rapids_triton/triton/device.hpp> #include <rmm/cuda_stream_view.hpp> #include <rmm/mr/device/device_memory_resource.hpp> #include <stdexcept> #include <utility> namespace triton { namespace backend { namespace rapids { struct triton_memory_resource final : public rmm::mr::device_memory_resource { triton_memory_resource(TRITONBACKEND_MemoryManager* manager, device_id_t device_id, rmm::mr::device_memory_resource* fallback) : manager_{manager}, device_id_{device_id}, fallback_{fallback} { } bool supports_streams() const noexcept override { return false; } bool supports_get_mem_info() const noexcept override { return false; } auto* get_triton_manager() const noexcept { return manager_; } private: TRITONBACKEND_MemoryManager* manager_; std::int64_t device_id_; rmm::mr::device_memory_resource* fallback_; void* do_allocate(std::size_t bytes, rmm::cuda_stream_view stream) override { auto* ptr = static_cast<void*>(nullptr); if (manager_ == nullptr) { ptr = fallback_->allocate(bytes, stream); } else { triton_check(TRITONBACKEND_MemoryManagerAllocate( manager_, &ptr, TRITONSERVER_MEMORY_GPU, device_id_, static_cast<std::uint64_t>(bytes))); } return ptr; } void do_deallocate(void* ptr, std::size_t bytes, rmm::cuda_stream_view stream) { if (manager_ == nullptr) { fallback_->deallocate(ptr, bytes, stream); } else { triton_check( TRITONBACKEND_MemoryManagerFree(manager_, ptr, TRITONSERVER_MEMORY_GPU, device_id_)); } } bool do_is_equal(rmm::mr::device_memory_resource const& other) const noexcept override { auto* other_triton_mr = dynamic_cast<triton_memory_resource const*>(&other); return (other_triton_mr != nullptr && other_triton_mr->get_triton_manager() == manager_); } std::pair<std::size_t, std::size_t> do_get_mem_info(rmm::cuda_stream_view stream) const override { return {0, 0}; } }; } // namespace rapids } // namespace backend } // namespace triton
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton/model_instance_state.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <triton/backend/backend_model_instance.h> #include <cstdint> #include <memory> #include <rapids_triton/triton/model_instance.hpp> #include <rapids_triton/triton/model_state.hpp> namespace triton { namespace backend { namespace rapids { template <typename RapidsModel, typename RapidsSharedState> struct ModelInstanceState : public BackendModelInstance { ModelInstanceState(TritonModelState<RapidsSharedState>& model_state, TRITONBACKEND_ModelInstance* triton_model_instance) : BackendModelInstance(&model_state, triton_model_instance), model_(model_state.get_shared_state(), rapids::get_device_id(*triton_model_instance), CudaStream(), Kind(), JoinPath({model_state.RepositoryPath(), std::to_string(model_state.Version()), ArtifactFilename()})) { } auto& get_model() const { return model_; } void load() { model_.load(); } void unload() { model_.unload(); } private: RapidsModel model_; }; } // namespace rapids } // namespace backend } // namespace triton
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton/statistics.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <triton/core/tritonbackend.h> #include <chrono> #include <cstddef> #include <rapids_triton/exceptions.hpp> namespace triton { namespace backend { namespace rapids { using time_point = std::chrono::time_point<std::chrono::steady_clock>; /** * @brief Report inference statistics for a single request * * @param instance The Triton model instance which is processing this request * @param request The Triton request object itself * @param start_time The time at which the backend first received the request * @param compute_start_time The time at which the backend began actual * inference on the request * @param compute_end_time The time at which the backend completed inference * on the request * @param end_time The time at which the backend finished all processing on * the request, including copying out results and returning a response */ inline void report_statistics(TRITONBACKEND_ModelInstance& instance, TRITONBACKEND_Request& request, time_point start_time, time_point compute_start_time, time_point compute_end_time, time_point end_time) { triton_check( TRITONBACKEND_ModelInstanceReportStatistics(&instance, &request, true, start_time.time_since_epoch().count(), compute_start_time.time_since_epoch().count(), compute_end_time.time_since_epoch().count(), end_time.time_since_epoch().count())); } /** * @brief Report inference statistics for a batch of requests of given size * * @param instance The Triton model instance which is processing this batch * @param request_count The number of requests in this batch * @param start_time The time at which the backend first received the batch * @param compute_start_time The time at which the backend began actual * inference on the batch * @param compute_end_time The time at which the backend completed inference * on the batch * @param end_time The time at which the backend finished all processing on * the batch, including copying out results and returning a response */ inline void report_statistics(TRITONBACKEND_ModelInstance& instance, std::size_t request_count, time_point start_time, time_point compute_start_time, time_point compute_end_time, time_point end_time) { triton_check( TRITONBACKEND_ModelInstanceReportBatchStatistics(&instance, request_count, start_time.time_since_epoch().count(), compute_start_time.time_since_epoch().count(), compute_end_time.time_since_epoch().count(), end_time.time_since_epoch().count())); } } // namespace rapids } // namespace backend } // namespace triton
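An illustrative sketch of the four time points Triton expects; `time_and_report` is a hypothetical helper, and the inner comments mark where inference and response handling would occur.

```
#include <triton/core/tritonbackend.h>

#include <chrono>
#include <cstddef>
#include <rapids_triton/triton/statistics.hpp>

namespace rapids = triton::backend::rapids;

// Hypothetical timing sketch: capture start, compute-start, compute-end, and end
// time points, then report them for the whole batch once processing has finished.
inline void time_and_report(TRITONBACKEND_ModelInstance& instance, std::size_t request_count)
{
  auto start         = std::chrono::steady_clock::now();
  auto compute_start = std::chrono::steady_clock::now();
  // ... run inference here ...
  auto compute_end   = std::chrono::steady_clock::now();
  // ... copy out results and send responses here ...
  auto end           = std::chrono::steady_clock::now();
  rapids::report_statistics(instance, request_count, start, compute_start, compute_end, end);
}
```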
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton/device.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <cstdint> namespace triton { namespace backend { namespace rapids { using device_id_t = std::int32_t; } } // namespace backend } // namespace triton
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton/model_state.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <triton/backend/backend_model.h> #include <memory> #include <rapids_triton/triton/model.hpp> namespace triton { namespace backend { namespace rapids { template <typename RapidsSharedState> struct TritonModelState : public BackendModel { TritonModelState(TRITONBACKEND_Model& triton_model) : BackendModel(&triton_model), state_{std::make_shared<RapidsSharedState>(get_model_config(triton_model))} { } void load() { state_->load(); } void unload() { state_->unload(); } auto get_shared_state() { return state_; } private: std::shared_ptr<RapidsSharedState> state_; }; } // namespace rapids } // namespace backend } // namespace triton
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton/deployment.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <triton/core/tritonserver.h> namespace triton { namespace backend { namespace rapids { using DeploymentType = TRITONSERVER_InstanceGroupKind; auto constexpr GPUDeployment = TRITONSERVER_INSTANCEGROUPKIND_GPU; auto constexpr CPUDeployment = TRITONSERVER_INSTANCEGROUPKIND_CPU; // Note (wphicks): We currently are not including "Auto" or "Model" because I // am not sure exactly how those would be used in context. If there is a // demand, they can be added. } // namespace rapids } // namespace backend } // namespace triton
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton/responses.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <triton/core/tritonbackend.h> #include <algorithm> #include <iterator> #include <rapids_triton/exceptions.hpp> #include <rapids_triton/triton/logging.hpp> #include <vector> namespace triton { namespace backend { namespace rapids { template <typename Iter> auto construct_responses(Iter requests_begin, Iter requests_end) { auto responses = std::vector<TRITONBACKEND_Response*>{}; auto requests_size = std::distance(requests_begin, requests_end); if (!(requests_size > 0)) { throw TritonException(Error::Internal, "Invalid iterators for requests when constructing responses"); } responses.reserve(requests_size); std::transform(requests_begin, requests_end, std::back_inserter(responses), [](auto* request) { auto* response = static_cast<TRITONBACKEND_Response*>(nullptr); triton_check(TRITONBACKEND_ResponseNew(&response, request)); return response; }); return responses; } template <typename Iter> void send_responses(Iter begin, Iter end, TRITONSERVER_Error* err) { std::for_each(begin, end, [err](auto& response) { decltype(err) err_copy; if (err != nullptr) { err_copy = TRITONSERVER_ErrorNew(TRITONSERVER_ErrorCode(err), TRITONSERVER_ErrorMessage(err)); } else { err_copy = err; } if (response == nullptr) { log_error(__FILE__, __LINE__) << "Failure in response collation"; } else { try { triton_check( TRITONBACKEND_ResponseSend(response, TRITONSERVER_RESPONSE_COMPLETE_FINAL, err_copy)); } catch (TritonException& err) { log_error(__FILE__, __LINE__, err.what()); } } }); } } // namespace rapids } // namespace backend } // namespace triton
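A hedged end-of-batch sketch combining `construct_responses` and `send_responses` with `release_requests` from the requests header; `respond_and_release` is an invented helper, shown only to make the intended call order explicit.

```
#include <vector>

#include <rapids_triton/triton/requests.hpp>
#include <rapids_triton/triton/responses.hpp>

namespace rapids = triton::backend::rapids;

// Hypothetical sketch: create one response per request, send them all (forwarding
// any error), then hand the requests back to Triton.
inline void respond_and_release(std::vector<TRITONBACKEND_Request*>& requests,
                                TRITONSERVER_Error* err)
{
  auto responses = rapids::construct_responses(std::begin(requests), std::end(requests));
  rapids::send_responses(std::begin(responses), std::end(responses), err);
  rapids::release_requests(std::begin(requests), std::end(requests));
}
```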
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton/output.hpp
/*
 * Copyright (c) 2021, NVIDIA CORPORATION.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
#pragma once
#include <stdint.h>

#include <triton/core/tritonbackend.h>

#include <algorithm>
#include <iterator>
#include <numeric>
#include <rapids_triton/exceptions.hpp>
#include <rapids_triton/tensor/dtype.hpp>
#include <rapids_triton/utils/narrow.hpp>
#include <sstream>
#include <string>
#include <vector>

namespace triton {
namespace backend {
namespace rapids {
inline auto* get_triton_input(TRITONBACKEND_Request* request, std::string const& name)
{
  auto* result = static_cast<TRITONBACKEND_Input*>(nullptr);
  triton_check(TRITONBACKEND_RequestInput(request, name.c_str(), &result));
  return result;
}

template <typename T, typename Iter>
auto get_triton_output_shape(Iter requests_begin, Iter requests_end, std::string const& name)
{
  auto result = std::vector<std::size_t>{};

  auto reported_dtype     = DType{};
  auto const* input_shape = static_cast<int64_t*>(nullptr);
  auto input_dims         = uint32_t{};
  auto batch_dim =
    std::reduce(requests_begin,
                requests_end,
                int64_t{},
                [&reported_dtype, &input_shape, &input_dims, &name](auto& request, auto total) {
                  auto* input = get_triton_input(request, name);
                  triton_check(TRITONBACKEND_InputProperties(
                    input, nullptr, &reported_dtype, &input_shape, &input_dims, nullptr, nullptr));
                  if (reported_dtype != TritonDtype<T>::value) {
                    auto log_stream = std::stringstream{};
                    log_stream << "incorrect type " << reported_dtype
                               << " for output with required type " << TritonDtype<T>::value;
                    throw(TritonException(Error::Internal, log_stream.str()));
                  }
                  if (input_dims != 0) { total += *input_shape; }
                  return total;
                });
  result.reserve(input_dims);
  std::transform(input_shape, input_shape + input_dims, std::back_inserter(result), [](auto& val) {
    return narrow<std::size_t>(val);
  });
  if (!result.empty()) { result[0] = narrow<std::size_t>(batch_dim); }
  return result;
}
}  // namespace rapids
}  // namespace backend
}  // namespace triton
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton/config.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <stdint.h> #include <cstddef> #include <triton/backend/backend_common.h> #include <rapids_triton/exceptions.hpp> #include <rapids_triton/utils/narrow.hpp> namespace triton { namespace backend { namespace rapids { inline auto get_max_batch_size(common::TritonJson::Value& config) { auto reported = int64_t{}; triton_check(config.MemberAsInt("max_batch_size", &reported)); return narrow<std::size_t>(reported); } } // namespace rapids } // namespace backend } // namespace triton
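A small sketch tying this to the model helpers: fetch the parsed configuration for a model and read its `max_batch_size`. `read_max_batch_size` is a hypothetical convenience wrapper.

```
#include <rapids_triton/triton/config.hpp>
#include <rapids_triton/triton/model.hpp>

namespace rapids = triton::backend::rapids;

// Hypothetical sketch: parse config.pbtxt via Triton, then read max_batch_size as std::size_t.
inline auto read_max_batch_size(TRITONBACKEND_Model& model)
{
  auto config = rapids::get_model_config(model);
  return rapids::get_max_batch_size(*config);
}
```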
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton/input.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <stdint.h> #include <triton/core/tritonbackend.h> #include <algorithm> #include <numeric> #include <rapids_triton/exceptions.hpp> #include <rapids_triton/tensor/dtype.hpp> #include <rapids_triton/utils/narrow.hpp> #include <sstream> #include <string> #include <vector> namespace triton { namespace backend { namespace rapids { inline auto* get_triton_input(TRITONBACKEND_Request* request, std::string const& name) { auto result = static_cast<TRITONBACKEND_Input*>(nullptr); triton_check(TRITONBACKEND_RequestInput(request, name.c_str(), &result)); return result; } template <typename T, typename Iter> auto get_triton_input_shape(Iter requests_begin, Iter requests_end, std::string const& name) { auto result = std::vector<std::size_t>{}; auto reported_dtype = DType{}; auto const* input_shape = static_cast<int64_t*>(nullptr); auto input_dims = uint32_t{}; auto batch_dim = std::accumulate( requests_begin, requests_end, int64_t{}, [&reported_dtype, &input_shape, &input_dims, &name](auto total, auto& request) { auto* input = get_triton_input(request, name); triton_check(TRITONBACKEND_InputProperties( input, nullptr, &reported_dtype, &input_shape, &input_dims, nullptr, nullptr)); if (reported_dtype != TritonDtype<T>::value) { auto log_stream = std::stringstream{}; log_stream << "incorrect type " << reported_dtype << " for input with required type " << TritonDtype<T>::value; throw(TritonException(Error::Internal, log_stream.str())); } if (input_dims != 0) { total += *input_shape; } return total; }); result.reserve(input_dims); std::transform(input_shape, input_shape + input_dims, std::back_inserter(result), [](auto& val) { return narrow<std::size_t>(val); }); if (!result.empty()) { result[0] = narrow<std::size_t>(batch_dim); } return result; } } // namespace rapids } // namespace backend } // namespace triton
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton/requests.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <stdint.h> #include <triton/backend/backend_common.h> #include <algorithm> #include <rapids_triton/exceptions.hpp> #include <rapids_triton/triton/logging.hpp> namespace triton { namespace backend { namespace rapids { using request_size_t = uint32_t; template <typename Iter> void release_requests(Iter begin, Iter end) { std::for_each(begin, end, [](auto& request) { try { triton_check(TRITONBACKEND_RequestRelease(request, TRITONSERVER_REQUEST_RELEASE_ALL)); } catch (TritonException& err) { log_error(__FILE__, __LINE__, err.what()); } }); } } // namespace rapids } // namespace backend } // namespace triton
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton/api/execute.hpp
/*
 * Copyright (c) 2021, NVIDIA CORPORATION.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
#pragma once
#include <triton/backend/backend_common.h>

#include <algorithm>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <rapids_triton/batch/batch.hpp>
#include <rapids_triton/exceptions.hpp>
#include <rapids_triton/triton/model.hpp>
#include <rapids_triton/triton/model_instance.hpp>
#include <rapids_triton/triton/statistics.hpp>
#include <rapids_triton/utils/narrow.hpp>
#include <vector>

namespace triton {
namespace backend {
namespace rapids {
namespace triton_api {
template <typename ModelState, typename ModelInstanceState>
auto* execute(TRITONBACKEND_ModelInstance* instance,
              TRITONBACKEND_Request** raw_requests,
              std::size_t request_count)
{
  auto start_time = std::chrono::steady_clock::now();

  auto* result = static_cast<TRITONSERVER_Error*>(nullptr);
  try {
    auto* model_state    = get_model_state<ModelState>(*get_model_from_instance(*instance));
    auto* instance_state = get_instance_state<ModelInstanceState>(*instance);
    auto& model          = instance_state->get_model();
    auto max_batch_size  = model.template get_config_param<std::size_t>("max_batch_size");

    /* Note: It is safe to keep a reference to the model in this closure
     * and a pointer to the instance in the next because the batch goes
     * out of scope at the end of this block and Triton guarantees that
     * the lifetimes of both the instance and model extend beyond this
     * function call.
*/ auto output_shape_fetcher = [&model](std::string const& name, Batch::size_type batch_dim) { auto result = std::vector<Batch::size_type>{}; auto config_shape = model.get_output_shape(name); if (config_shape.size() > 0 && config_shape[0] < 0) { config_shape[0] = batch_dim; } std::transform( std::begin(config_shape), std::end(config_shape), std::back_inserter(result), [](auto& coord) { if (coord < 0) { throw TritonException( Error::Internal, "Backends with variable-shape outputs must request desired output shape"); } else { return narrow<std::size_t>(coord); } }); return result; }; auto statistics_reporter = [instance](TRITONBACKEND_Request* request, time_point req_start, time_point req_comp_start, time_point req_comp_end, time_point req_end) { report_statistics(*instance, *request, req_start, req_comp_start, req_comp_end, req_end); }; auto batch = Batch(raw_requests, request_count, *(model_state->TritonMemoryManager()), std::move(output_shape_fetcher), std::move(statistics_reporter), model_state->EnablePinnedInput(), model_state->EnablePinnedOutput(), max_batch_size, model.get_stream()); if constexpr (IS_GPU_BUILD) { if (model.get_deployment_type() == GPUDeployment) { cuda_check(cudaSetDevice(model.get_device_id())); } } auto predict_err = static_cast<TRITONSERVER_Error*>(nullptr); try { model.predict(batch); } catch (TritonException& err) { predict_err = err.error(); } auto& compute_start_time = batch.compute_start_time(); auto compute_end_time = std::chrono::steady_clock::now(); batch.finalize(predict_err); auto end_time = std::chrono::steady_clock::now(); report_statistics( *instance, request_count, start_time, compute_start_time, compute_end_time, end_time); } catch (TritonException& err) { result = err.error(); } return result; } } // namespace triton_api } // namespace rapids } // namespace backend } // namespace triton
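For clarity, the shape resolution performed by `output_shape_fetcher` above can be read in isolation roughly as follows; `resolve_output_shape` is an illustrative re-statement for documentation purposes, not a function provided by the library.

```
#include <cstddef>
#include <cstdint>
#include <vector>

#include <rapids_triton/exceptions.hpp>
#include <rapids_triton/utils/narrow.hpp>

namespace rapids = triton::backend::rapids;

// Illustrative sketch: a leading -1 in the configured output shape stands for the
// batch dimension and is replaced with the actual batch size; any other negative
// extent means the backend must request its desired output shape explicitly.
inline auto resolve_output_shape(std::vector<std::int64_t> config_shape, std::size_t batch_dim)
{
  if (!config_shape.empty() && config_shape[0] < 0) {
    config_shape[0] = static_cast<std::int64_t>(batch_dim);
  }
  auto result = std::vector<std::size_t>{};
  result.reserve(config_shape.size());
  for (auto coord : config_shape) {
    if (coord < 0) {
      throw rapids::TritonException(rapids::Error::Internal,
                                    "variable-shape outputs must request desired output shape");
    }
    result.push_back(rapids::narrow<std::size_t>(coord));
  }
  return result;
}
```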
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton/api/instance_initialize.hpp
/*
 * Copyright (c) 2021, NVIDIA CORPORATION.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
#pragma once

#include <triton/backend/backend_common.h>
#include <triton/backend/backend_model_instance.h>

#include <memory>
#include <rapids_triton/build_control.hpp>
#include <rapids_triton/exceptions.hpp>
#include <rapids_triton/memory/resource.hpp>
#include <rapids_triton/triton/deployment.hpp>
#include <rapids_triton/triton/logging.hpp>
#include <rapids_triton/triton/model.hpp>
#include <rapids_triton/triton/model_instance.hpp>
#include <utility>

namespace triton {
namespace backend {
namespace rapids {
namespace triton_api {
template <typename ModelState, typename ModelInstanceState>
auto* instance_initialize(TRITONBACKEND_ModelInstance* instance)
{
  auto* result = static_cast<TRITONSERVER_Error*>(nullptr);
  try {
    auto name            = get_model_instance_name(*instance);
    auto device_id       = get_device_id(*instance);
    auto deployment_type = get_deployment_type(*instance);
    if constexpr (!IS_GPU_BUILD) {
      if (deployment_type == GPUDeployment) {
        throw TritonException(Error::Unsupported, "KIND_GPU cannot be used in CPU-only build");
      }
    }

    log_info(__FILE__, __LINE__) << "TRITONBACKEND_ModelInstanceInitialize: " << name << " ("
                                 << TRITONSERVER_InstanceGroupKindString(deployment_type)
                                 << " device " << device_id << ")";

    auto* triton_model = get_model_from_instance(*instance);
    auto* model_state  = get_model_state<ModelState>(*triton_model);
    if constexpr (IS_GPU_BUILD) {
      setup_memory_resource(device_id, model_state->TritonMemoryManager());
    }

    auto rapids_model = std::make_unique<ModelInstanceState>(*model_state, instance);
    if constexpr (IS_GPU_BUILD) {
      auto& model = rapids_model->get_model();
      if (model.get_deployment_type() == GPUDeployment) {
        cuda_check(cudaSetDevice(model.get_device_id()));
      }
    }
    rapids_model->load();

    set_instance_state<ModelInstanceState>(*instance, std::move(rapids_model));
  } catch (TritonException& err) {
    result = err.error();
  }
  return result;
}
}  // namespace triton_api
}  // namespace rapids
}  // namespace backend
}  // namespace triton
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton/api/instance_finalize.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <triton/backend/backend_common.h> #include <rapids_triton/exceptions.hpp> #include <rapids_triton/triton/logging.hpp> #include <rapids_triton/triton/model_instance.hpp> namespace triton { namespace backend { namespace rapids { namespace triton_api { template <typename ModelInstanceState> auto* instance_finalize(TRITONBACKEND_ModelInstance* instance) { auto* result = static_cast<TRITONSERVER_Error*>(nullptr); try { auto* instance_state = get_instance_state<ModelInstanceState>(*instance); if (instance_state != nullptr) { instance_state->unload(); log_info(__FILE__, __LINE__) << "TRITONBACKEND_ModelInstanceFinalize: delete instance state"; delete instance_state; } } catch (TritonException& err) { result = err.error(); } return result; } } // namespace triton_api } // namespace rapids } // namespace backend } // namespace triton
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton/api/model_finalize.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <triton/backend/backend_common.h> #include <triton/backend/backend_model.h> #include <rapids_triton/exceptions.hpp> #include <rapids_triton/triton/logging.hpp> #include <rapids_triton/triton/model.hpp> namespace triton { namespace backend { namespace rapids { namespace triton_api { template <typename ModelState> auto* model_finalize(TRITONBACKEND_Model* model) { auto* result = static_cast<TRITONSERVER_Error*>(nullptr); try { auto model_state = get_model_state<ModelState>(*model); if (model_state != nullptr) { model_state->get_shared_state()->unload(); } log_info(__FILE__, __LINE__) << "TRITONBACKEND_ModelFinalize: delete model state"; delete model_state; } catch (TritonException& err) { result = err.error(); } return result; } } // namespace triton_api } // namespace rapids } // namespace backend } // namespace triton
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton/api/initialize.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #ifdef TRITON_ENABLE_GPU #include <cuda_runtime_api.h> #else #include <rapids_triton/cpu_only/cuda_runtime_replacement.hpp> #endif #include <triton/backend/backend_common.h> #include <rapids_triton/build_control.hpp> #include <rapids_triton/exceptions.hpp> #include <rapids_triton/memory/resource.hpp> #include <rapids_triton/triton/backend.hpp> #include <rapids_triton/triton/device.hpp> #include <rapids_triton/triton/logging.hpp> #include <string> namespace triton { namespace backend { namespace rapids { namespace triton_api { inline auto* initialize(TRITONBACKEND_Backend* backend) { auto* result = static_cast<TRITONSERVER_Error*>(nullptr); try { auto name = get_backend_name(*backend); log_info(__FILE__, __LINE__) << "TRITONBACKEND_Initialize: " << name; if (!check_backend_version(*backend)) { throw TritonException{Error::Unsupported, "triton backend API version does not support this backend"}; } if constexpr (IS_GPU_BUILD) { auto device_count = int{}; auto cuda_err = cudaGetDeviceCount(&device_count); if (device_count > 0 && cuda_err == cudaSuccess) { auto device_id = int{}; cuda_check(cudaGetDevice(&device_id)); auto* triton_manager = static_cast<TRITONBACKEND_MemoryManager*>(nullptr); triton_check(TRITONBACKEND_BackendMemoryManager(backend, &triton_manager)); setup_memory_resource(static_cast<device_id_t>(device_id), triton_manager); } } } catch (TritonException& err) { result = err.error(); } return result; } } // namespace triton_api } // namespace rapids } // namespace backend } // namespace triton
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/triton/api/model_initialize.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <triton/backend/backend_common.h> #include <triton/backend/backend_model.h> #include <rapids_triton/exceptions.hpp> #include <rapids_triton/triton/logging.hpp> #include <rapids_triton/triton/model.hpp> namespace triton { namespace backend { namespace rapids { namespace triton_api { template <typename ModelState> auto* model_initialize(TRITONBACKEND_Model* model) { auto* result = static_cast<TRITONSERVER_Error*>(nullptr); try { auto name = get_model_name(*model); auto version = get_model_version(*model); log_info(__FILE__, __LINE__) << "TRITONBACKEND_ModelInitialize: " << name << " (version " << version << ")"; auto rapids_model_state = std::make_unique<ModelState>(*model); rapids_model_state->load(); set_model_state(*model, std::move(rapids_model_state)); } catch (TritonException& err) { result = err.error(); } return result; } } // namespace triton_api } // namespace rapids } // namespace backend } // namespace triton
0
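The remaining lifecycle entry points follow the same pattern, each forwarding to the corresponding template in this directory. The following is a sketch rather than the library's canonical wiring; `MyModelState` is again a placeholder and `rapids` abbreviates `triton::backend::rapids`.

```
extern "C" {

TRITONSERVER_Error* TRITONBACKEND_Initialize(TRITONBACKEND_Backend* backend)
{
  // Checks API compatibility and, in GPU builds, registers memory resources
  return rapids::triton_api::initialize(backend);
}

TRITONSERVER_Error* TRITONBACKEND_ModelInitialize(TRITONBACKEND_Model* model)
{
  // Constructs and loads the state shared by all instances of this model
  return rapids::triton_api::model_initialize<MyModelState>(model);
}

TRITONSERVER_Error* TRITONBACKEND_ModelFinalize(TRITONBACKEND_Model* model)
{
  // Unloads and deletes the shared model state
  return rapids::triton_api::model_finalize<MyModelState>(model);
}

}  // extern "C"
```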
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/batch/batch.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #ifdef TRITON_ENABLE_GPU #include <cuda_runtime_api.h> #else #include <rapids_triton/cpu_only/cuda_runtime_replacement.hpp> #endif #include <stdint.h> #include <triton/backend/backend_input_collector.h> #include <triton/backend/backend_output_responder.h> #include <algorithm> #include <chrono> #include <functional> #include <iterator> #include <memory> #include <numeric> #include <rapids_triton/build_control.hpp> #include <rapids_triton/exceptions.hpp> #include <rapids_triton/memory/buffer.hpp> #include <rapids_triton/memory/types.hpp> #include <rapids_triton/tensor/tensor.hpp> #include <rapids_triton/triton/device.hpp> #include <rapids_triton/triton/input.hpp> #include <rapids_triton/triton/requests.hpp> #include <rapids_triton/triton/responses.hpp> #include <rapids_triton/triton/statistics.hpp> #include <rapids_triton/utils/narrow.hpp> #include <string> #include <vector> namespace triton { namespace backend { namespace rapids { /** * @brief A representation of all data about a single batch of inference * requests * * Batch objects are the primary interface point between rapids_triton Models * and the Triton server itself. By calling the `get_input` and `get_output` * methods of a batch, Model implementations can retrieve the input Tensors * necessary for prediction and the output Tensors where results can be * stored. * * Batch objects also handle a variety of other tasks necessary for * processing a batch in the Triton model. This includes reporting statistics * on how long it took to process requests and sending responses to the * client via the Triton server once processing is complete. * * It is not recommended that developers of rapids_triton backends try to * construct Batch objects directly. Instead, you should make use of the * rapids::triton_api::execute template, which will construct the Batch for * you. 
*/ struct Batch { using size_type = std::size_t; Batch(TRITONBACKEND_Request** raw_requests, request_size_t count, TRITONBACKEND_MemoryManager& triton_mem_manager, std::function<std::vector<size_type>(std::string const&, size_type)>&& get_output_shape, std::function<void(TRITONBACKEND_Request*, time_point const&, time_point const&, time_point const&, time_point const&)>&& report_request_statistics, bool use_pinned_input, bool use_pinned_output, size_type max_batch_size, cudaStream_t stream) : requests_{raw_requests, raw_requests + count}, responses_{construct_responses(requests_.begin(), requests_.end())}, get_output_shape_{std::move(get_output_shape)}, report_statistics_{std::move(report_request_statistics)}, collector_(raw_requests, count, &responses_, &triton_mem_manager, use_pinned_input, stream), responder_{std::make_shared<BackendOutputResponder>(raw_requests, count, &responses_, max_batch_size, &triton_mem_manager, use_pinned_output, stream)}, stream_{stream}, start_time_{std::chrono::steady_clock::now()}, compute_start_time_{std::chrono::steady_clock::now()}, batch_size_{} { } template <typename T> auto get_input_shape(std::string const& name) { auto result = std::vector<size_type>{}; if (!requests_.empty()) { result = get_triton_input_shape<T>(std::begin(requests_), std::end(requests_), name); auto input_batch_dim = size_type{}; if (result.size() > 0) { input_batch_dim = result[0]; } if (batch_size_.has_value()) { if (batch_size_.value() != input_batch_dim) { throw TritonException(Error::Internal, "all input tensors must have same batch dimension"); } } else { batch_size_ = input_batch_dim; } } return result; } template <typename T> auto get_input(std::string const& name, std::optional<MemoryType> const& memory_type, device_id_t device_id, cudaStream_t stream) { auto shape = get_input_shape<T>(name); auto size_bytes = sizeof(T) * std::reduce(shape.begin(), shape.end(), std::size_t{1}, std::multiplies<>()); auto allowed_memory_configs = std::vector<std::pair<MemoryType, int64_t>>{}; if (memory_type.has_value()) { allowed_memory_configs.emplace_back(memory_type.value(), device_id); } else { allowed_memory_configs.emplace_back(HostMemory, int64_t{}); allowed_memory_configs.emplace_back(DeviceMemory, device_id); } auto const* raw_buffer = static_cast<char*>(nullptr); auto reported_bytes = std::size_t{}; auto reported_mem_type = MemoryType{}; auto reported_device_id = int64_t{}; triton_check( collector_.ProcessTensor(name.c_str(), static_cast<char*>(nullptr), // Return data without copy if possible size_bytes, allowed_memory_configs, &raw_buffer, &reported_bytes, &reported_mem_type, &reported_device_id)); if(collector_.Finalize()){ if constexpr (IS_GPU_BUILD) { cuda_check(cudaStreamSynchronize(stream_)); } else { throw TritonException(Error::Internal, "stream synchronization required in non-GPU build"); } } std::for_each(std::begin(responses_), std::end(responses_), [](auto* response) { if (response == nullptr) { throw TritonException(Error::Internal, "Input collection failed"); } }); auto buffer = Buffer(reinterpret_cast<T*>(raw_buffer), reported_bytes / sizeof(T), reported_mem_type, reported_device_id, stream); if (memory_type && (reported_mem_type != memory_type || reported_device_id != device_id)) { throw TritonException(Error::Internal, "data collected in wrong location"); } // Set start time of batch to time latest input tensor was retrieved compute_start_time_ = std::chrono::steady_clock::now(); return Tensor(std::move(shape), std::move(buffer)); } template <typename T> auto 
get_input(std::string const& name, std::optional<MemoryType> const& memory_type, device_id_t device_id) { return get_input<T>(name, memory_type, device_id, stream_); } template <typename T> auto get_output(std::string const& name, std::optional<MemoryType> const& memory_type, device_id_t device_id, cudaStream_t stream) { if (!batch_size_.has_value()) { throw TritonException(Error::Internal, "At least one input must be retrieved before any output"); } auto shape = get_output_shape_(name, batch_size_.value()); auto buffer_size = std::reduce(shape.begin(), shape.end(), std::size_t{1}, std::multiplies<>()); auto final_memory_type = MemoryType{}; if (memory_type.has_value()) { final_memory_type = memory_type.value(); } else { // If consumer doesn't care, use HostMemory to avoid additional copy on // non-shared-memory responses. final_memory_type = HostMemory; } auto buffer = Buffer<T>(buffer_size, final_memory_type, device_id, stream); return OutputTensor<T>(std::move(shape), std::move(buffer), name, responder_); } template <typename T> auto get_output(std::string const& name, std::optional<MemoryType> const& memory_type, device_id_t device_id) { return get_output<T>(name, memory_type, device_id, stream_); } auto const& compute_start_time() const { return compute_start_time_; } auto stream() const { return stream_; } void finalize(TRITONSERVER_Error* err) { auto compute_end_time = std::chrono::steady_clock::now(); if (responder_->Finalize()) { cuda_check(cudaStreamSynchronize(stream_)); } send_responses(std::begin(responses_), std::end(responses_), err); // Triton resumes ownership of failed requests; only release on success if (err == nullptr) { std::for_each( std::begin(requests_), std::end(requests_), [this, &compute_end_time](auto& request) { report_statistics_(request, start_time_, compute_start_time_, compute_end_time, std::chrono::steady_clock::now()); }); release_requests(std::begin(requests_), std::end(requests_)); } } private: std::vector<TRITONBACKEND_Request*> requests_; std::vector<TRITONBACKEND_Response*> responses_; std::function<std::vector<size_type>(std::string const&, size_type)> get_output_shape_; std::function<void(TRITONBACKEND_Request*, time_point const&, time_point const&, time_point const&, time_point const&)> report_statistics_; BackendInputCollector collector_; std::shared_ptr<BackendOutputResponder> responder_; cudaStream_t stream_; std::chrono::time_point<std::chrono::steady_clock> start_time_; std::chrono::time_point<std::chrono::steady_clock> compute_start_time_; std::optional<size_type> batch_size_; }; } // namespace rapids } // namespace backend } // namespace triton
0
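Because a Batch is always handed to a Model's `predict` implementation, the tensors it produces can be inspected there before inference runs. The fragment below is an illustrative sketch only (it is not the example backend's code); `"input__0"` follows the naming used elsewhere in this repository, and the surrounding class is assumed to derive from `rapids::Model`.

```
void predict(rapids::Batch& batch) const
{
  auto input = get_input<float>(batch, "input__0");
  // The first dimension of the reported shape is the batch dimension that
  // Batch::get_input_shape validated across all input tensors
  auto const& shape = input.shape();
  auto n_samples  = shape[0];
  auto n_features = shape.size() > 1 ? shape[1] : std::size_t{1};
  // ... run inference over n_samples rows of n_features values at input.data() ...
}
```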
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/cpu_only/cuda_runtime_replacement.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #ifdef TRITON_ENABLE_GPU #include <cuda_runtime_api.h> #else namespace triton { namespace backend { namespace rapids { using cudaStream_t = void*; enum struct cudaError_t {cudaSuccess, cudaErrorNonGpuBuild}; using cudaError = cudaError_t; auto constexpr cudaSuccess = cudaError_t::cudaSuccess; inline void cudaGetLastError() {} inline auto const * cudaGetErrorString(cudaError_t err) { return "CUDA function used in non-GPU build"; } inline auto cudaStreamSynchronize(cudaStream_t stream) { return cudaError_t::cudaErrorNonGpuBuild; } inline auto cudaGetDevice(int* device_id) { return cudaError_t::cudaErrorNonGpuBuild; } inline auto cudaGetDeviceCount(int* count) { return cudaError_t::cudaErrorNonGpuBuild; } } // namespace rapids } // namespace backend } // namespace triton #endif
0
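These stand-ins exist so that translation units which mention CUDA types and functions still compile when `TRITON_ENABLE_GPU` is not defined; actual execution is then guarded at compile time with the `IS_GPU_BUILD` flag from `build_control.hpp`. The fragment below sketches that pattern as it appears elsewhere in the library (for example in `Buffer::stream_synchronize`), with `stream` standing in for some `cudaStream_t` in scope.

```
// Compiles in both GPU and CPU-only builds; the synchronization only ever
// runs when the backend was built with GPU support
if constexpr (rapids::IS_GPU_BUILD) {
  rapids::cuda_check(cudaStreamSynchronize(stream));
}
```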
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/memory/types.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <triton/core/tritonserver.h> namespace triton { namespace backend { namespace rapids { using MemoryType = TRITONSERVER_MemoryType; auto constexpr DeviceMemory = TRITONSERVER_MEMORY_GPU; auto constexpr HostMemory = TRITONSERVER_MEMORY_CPU; } // namespace rapids } // namespace backend } // namespace triton
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/memory/resource.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <triton/core/tritonbackend.h> #include <rapids_triton/build_control.hpp> #include <rapids_triton/memory/detail/resource.hpp> #include <rapids_triton/triton/device.hpp> #ifdef TRITON_ENABLE_GPU #include <rapids_triton/memory/detail/gpu_only/resource.hpp> #else #include <rapids_triton/memory/detail/cpu_only/resource.hpp> #endif namespace triton { namespace backend { namespace rapids { inline void setup_memory_resource(device_id_t device_id, TRITONBACKEND_MemoryManager* triton_manager = nullptr) { detail::setup_memory_resource<IS_GPU_BUILD>(device_id, triton_manager); } } // namespace rapids } // namespace backend } // namespace triton
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/memory/buffer.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <cstddef> #include <memory> #include <stdexcept> #include <variant> #ifdef TRITON_ENABLE_GPU #include <cuda_runtime_api.h> #include <rapids_triton/memory/detail/gpu_only/copy.hpp> #include <rapids_triton/memory/detail/gpu_only/owned_device_buffer.hpp> #else #include <rapids_triton/cpu_only/cuda_runtime_replacement.hpp> #include <rapids_triton/memory/detail/cpu_only/copy.hpp> #include <rapids_triton/memory/detail/cpu_only/owned_device_buffer.hpp> #endif #include <rapids_triton/build_control.hpp> #include <rapids_triton/exceptions.hpp> #include <rapids_triton/memory/resource.hpp> #include <rapids_triton/memory/types.hpp> #include <rapids_triton/triton/device.hpp> #include <rapids_triton/triton/logging.hpp> namespace triton { namespace backend { namespace rapids { template <typename T> struct Buffer { using size_type = std::size_t; using value_type = T; using h_buffer = T*; using d_buffer = T*; using owned_h_buffer = std::unique_ptr<T[]>; using owned_d_buffer = detail::owned_device_buffer<T, IS_GPU_BUILD>; using data_store = std::variant<h_buffer, d_buffer, owned_h_buffer, owned_d_buffer>; Buffer() noexcept : device_{}, data_{std::in_place_index<0>, nullptr}, size_{}, stream_{} {} /** * @brief Construct buffer of given size in given memory location (either * on host or on device) * A buffer constructed in this way is owning and will release allocated * resources on deletion */ Buffer(size_type size, MemoryType memory_type = DeviceMemory, device_id_t device = 0, cudaStream_t stream = 0) : device_{device}, data_{allocate(size, device, memory_type, stream)}, size_{size}, stream_{stream} { if constexpr (!IS_GPU_BUILD) { if (memory_type == DeviceMemory) { throw TritonException( Error::Internal, "Cannot use device buffer in non-GPU build" ); } } } /** * @brief Construct buffer from given source in given memory location (either * on host or on device) * A buffer constructed in this way is non-owning; the caller is * responsible for freeing any resources associated with the input pointer */ Buffer(T* input_data, size_type size, MemoryType memory_type = DeviceMemory, device_id_t device = 0, cudaStream_t stream = 0) : device_{device}, data_{[&memory_type, &input_data]() { auto result = data_store{}; if (memory_type == HostMemory) { result = data_store{std::in_place_index<0>, input_data}; } else { if constexpr (!IS_GPU_BUILD) { throw TritonException( Error::Internal, "Cannot use device buffer in non-GPU build" ); } result = data_store{std::in_place_index<1>, input_data}; } return result; }()}, size_{size}, stream_{stream} { } /** * @brief Construct one buffer from another in the given memory location * (either on host or on device) * A buffer constructed in this way is owning and will copy the data from * the original location */ Buffer(Buffer<T> const& other, MemoryType memory_type, device_id_t device = 0) : device_{device}, data_([&other, &memory_type, &device]() { auto result = allocate(other.size_, 
                              device,
                              memory_type,
                              other.stream_);
        copy(result, other.data_, other.size_, other.stream_);
        return result;
      }()),
      size_{other.size_},
      stream_{other.stream_}
  {
  }

  /**
   * @brief Create owning copy of existing buffer
   * The memory type of this new buffer will be the same as the original
   */
  Buffer(Buffer<T> const& other) : Buffer(other, other.mem_type(), other.device()) {}

  Buffer(Buffer<T>&& other, MemoryType memory_type)
    : device_{other.device()},
      data_{[&other, memory_type]() {
        data_store result;
        if (memory_type == other.mem_type()) {
          result = std::move(other.data_);
        } else {
          // allocate expects (size, device, memory_type, stream)
          result = allocate(other.size_, other.device(), memory_type, other.stream());
          copy(result, other.data_, other.size_, other.stream_);
        }
        return result;
      }()},
      size_{other.size_},
      stream_{other.stream_}
  {
  }

  Buffer(Buffer<T>&& other) = default;
  Buffer<T>& operator=(Buffer<T>&& other) = default;

  ~Buffer() {}

  /**
   * @brief Return where memory for this buffer is located (host or device)
   */
  auto mem_type() const noexcept { return data_.index() % 2 == 0 ? HostMemory : DeviceMemory; }

  /**
   * @brief Return number of elements in buffer
   */
  auto size() const noexcept { return size_; }

  /**
   * @brief Return pointer to data stored in buffer
   */
  auto* data() const noexcept { return get_raw_ptr(data_); }

  auto device() const noexcept { return device_; }

  /**
   * @brief Return CUDA stream associated with this buffer
   */
  auto stream() const noexcept { return stream_; }

  void stream_synchronize() const
  {
    if constexpr (IS_GPU_BUILD) { cuda_check(cudaStreamSynchronize(stream_)); }
  }

  /**
   * @brief Set CUDA stream for this buffer to new value
   *
   * @warning This method calls cudaStreamSynchronize on the old stream
   * before updating. Be aware of performance implications and try to avoid
   * interactions between buffers on different streams where possible.
*/ void set_stream(cudaStream_t new_stream) { stream_synchronize(); stream_ = new_stream; } private: device_id_t device_; data_store data_; size_type size_; cudaStream_t stream_; // Helper function for accessing raw pointer to underlying data of // data_store static auto* get_raw_ptr(data_store const& ptr) noexcept { /* Switch statement is an optimization relative to std::visit to avoid * vtable overhead for a small number of alternatives */ auto* result = static_cast<T*>(nullptr); switch (ptr.index()) { case 0: result = std::get<0>(ptr); break; case 1: result = std::get<1>(ptr); break; case 2: result = std::get<2>(ptr).get(); break; case 3: result = std::get<3>(ptr).get(); break; } return result; } // Helper function for allocating memory in constructors static auto allocate(size_type size, device_id_t device = 0, MemoryType memory_type = DeviceMemory, cudaStream_t stream = 0) { auto result = data_store{}; if (memory_type == DeviceMemory) { if constexpr (IS_GPU_BUILD) { result = data_store{owned_d_buffer{ device, size, stream, }}; } else { throw TritonException(Error::Internal, "DeviceMemory requested in CPU-only build of FIL backend"); } } else { result = std::make_unique<T[]>(size); } return result; } // Helper function for copying memory in constructors, where there are // stronger guarantees on conditions that would otherwise need to be // checked static void copy(data_store const& dst, data_store const& src, size_type len, cudaStream_t stream) { // This function will only be called in constructors, so we allow a // const_cast here to perform the initial copy of data from a // Buffer<T const> to a newly-created Buffer<T const> auto raw_dst = const_cast<std::remove_const_t<T>*>(get_raw_ptr(dst)); auto raw_src = get_raw_ptr(src); auto dst_mem_type = dst.index() % 2 == 0 ? HostMemory : DeviceMemory; auto src_mem_type = src.index() % 2 == 0 ? HostMemory : DeviceMemory; detail::copy(raw_dst, raw_src, len, stream, dst_mem_type, src_mem_type); } }; /** * @brief Copy data from one Buffer to another * * @param dst The destination buffer * @param src The source buffer * @param dst_begin The offset from the beginning of the destination buffer * at which to begin copying to. * @param src_begin The offset from the beginning of the source buffer * at which to begin copying from. * @param src_end The offset from the beginning of the source buffer * before which to end copying from. 
*/ template <typename T, typename U> void copy(Buffer<T>& dst, Buffer<U> const& src, typename Buffer<T>::size_type dst_begin, typename Buffer<U>::size_type src_begin, typename Buffer<U>::size_type src_end) { if (dst.stream() != src.stream()) { dst.set_stream(src.stream()); } auto len = src_end - src_begin; if (len < 0 || src_end > src.size() || len > dst.size() - dst_begin) { throw TritonException(Error::Internal, "bad copy between buffers"); } auto raw_dst = dst.data() + dst_begin; auto raw_src = src.data() + src_begin; detail::copy(raw_dst, raw_src, len, dst.stream(), dst.mem_type(), src.mem_type()); } template <typename T, typename U> void copy(Buffer<T>& dst, Buffer<U> const& src) { copy(dst, src, 0, 0, src.size()); } template <typename T, typename U> void copy(Buffer<T>& dst, Buffer<U> const& src, typename Buffer<T>::size_type dst_begin) { copy(dst, src, dst_begin, 0, src.size()); } template <typename T, typename U> void copy(Buffer<T>& dst, Buffer<U> const& src, typename Buffer<U>::size_type src_begin, typename Buffer<U>::size_type src_end) { copy(dst, src, 0, src_begin, src_end); } } // namespace rapids } // namespace backend } // namespace triton
0
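Taken together, the constructors and copy helpers above allow host data to be staged into device memory with very little code. The following is a minimal sketch assuming a GPU-enabled build, a valid `stream`, and device 0; in a real backend the device allocation is served by the RMM resource registered through `setup_memory_resource`.

```
auto host_values = std::vector<float>(256, 1.0f);

// Non-owning view over existing host memory
auto host_buf = rapids::Buffer<float>(
  host_values.data(), host_values.size(), rapids::HostMemory, 0, stream);

// Owning device allocation of the same element count
auto device_buf = rapids::Buffer<float>(
  host_values.size(), rapids::DeviceMemory, 0, stream);

// Asynchronous host-to-device copy on the buffers' stream
rapids::copy(device_buf, host_buf);

// Wait for the copy to complete before the data is consumed elsewhere
device_buf.stream_synchronize();
```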
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/memory
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/memory/detail/owned_device_buffer.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once namespace triton { namespace backend { namespace rapids { namespace detail { template<typename T, bool enable_gpu> struct owned_device_buffer { }; } // namespace detail } // namespace rapids } // namespace backend } // namespace triton
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/memory
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/memory/detail/resource.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <triton/core/tritonbackend.h> #include <rapids_triton/triton/device.hpp> namespace triton { namespace backend { namespace rapids { namespace detail { template<bool enable_gpu> inline void setup_memory_resource(device_id_t device_id, TRITONBACKEND_MemoryManager* triton_manager = nullptr) { } } // namespace detail } // namespace rapids } // namespace backend } // namespace triton
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/memory/detail
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/memory/detail/cpu_only/owned_device_buffer.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <cstddef> #include <type_traits> #include <rapids_triton/cpu_only/cuda_runtime_replacement.hpp> #include <rapids_triton/exceptions.hpp> #include <rapids_triton/memory/detail/owned_device_buffer.hpp> #include <rapids_triton/triton/device.hpp> namespace triton { namespace backend { namespace rapids { namespace detail { template<typename T> struct owned_device_buffer<T, false> { using non_const_T = std::remove_const_t<T>; owned_device_buffer(device_id_t device_id, std::size_t size, cudaStream_t stream) { throw TritonException(Error::Internal, "Attempted to use device buffer in non-GPU build"); } auto* get() const { return static_cast<T*>(nullptr); } }; } // namespace detail } // namespace rapids } // namespace backend } // namespace triton
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/memory/detail
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/memory/detail/cpu_only/copy.hpp
/*
 * Copyright (c) 2021, NVIDIA CORPORATION.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
#pragma once
#include <cstddef>
#include <cstring>

#ifndef TRITON_ENABLE_GPU
#include <rapids_triton/cpu_only/cuda_runtime_replacement.hpp>
#endif

#include <rapids_triton/exceptions.hpp>
#include <rapids_triton/memory/types.hpp>

namespace triton {
namespace backend {
namespace rapids {
namespace detail {

template <typename T>
void copy(T* dst, T const* src, std::size_t len, cudaStream_t stream, MemoryType dst_type, MemoryType src_type)
{
  if (dst_type == DeviceMemory || src_type == DeviceMemory) {
    throw TritonException(Error::Internal, "Cannot copy device memory in non-GPU build");
  } else {
    std::memcpy(dst, src, len * sizeof(T));
  }
}

}  // namespace detail
}  // namespace rapids
}  // namespace backend
}  // namespace triton
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/memory/detail
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/memory/detail/cpu_only/resource.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <triton/core/tritonbackend.h> #include <rapids_triton/memory/detail/resource.hpp> #include <rapids_triton/triton/device.hpp> namespace triton { namespace backend { namespace rapids { namespace detail { template<> inline void setup_memory_resource<false>(device_id_t device_id, TRITONBACKEND_MemoryManager* triton_manager) { } } // namespace detail } // namespace rapids } // namespace backend } // namespace triton
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/memory/detail
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/memory/detail/gpu_only/owned_device_buffer.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <cstddef> #include <rapids_triton/memory/detail/owned_device_buffer.hpp> #include <rapids_triton/triton/device.hpp> #include <rapids_triton/utils/device_setter.hpp> #include <rmm/device_buffer.hpp> namespace triton { namespace backend { namespace rapids { namespace detail { template<typename T> struct owned_device_buffer<T, true> { using non_const_T = std::remove_const_t<T>; owned_device_buffer(device_id_t device_id, std::size_t size, cudaStream_t stream) : data_{[&device_id, &size, &stream]() { auto device_context = device_setter{device_id}; return rmm::device_buffer{size * sizeof(T), rmm::cuda_stream_view{stream}}; }()} { } auto* get() const { return reinterpret_cast<T*>(data_.data()); } private: mutable rmm::device_buffer data_; }; } // namespace detail } // namespace rapids } // namespace backend } // namespace triton
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/memory/detail
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/memory/detail/gpu_only/copy.hpp
/*
 * Copyright (c) 2021, NVIDIA CORPORATION.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
#pragma once
#ifdef TRITON_ENABLE_GPU
#include <cuda_runtime_api.h>
#endif

#include <cstddef>
#include <cstring>

#include <rapids_triton/memory/types.hpp>
#include <rapids_triton/exceptions.hpp>

namespace triton {
namespace backend {
namespace rapids {
namespace detail {

template <typename T>
void copy(T* dst, T const* src, std::size_t len, cudaStream_t stream, MemoryType dst_type, MemoryType src_type)
{
  if (dst_type == DeviceMemory || src_type == DeviceMemory) {
    // cudaMemcpyAsync takes a size in bytes, so scale the element count by sizeof(T)
    cuda_check(cudaMemcpyAsync(dst, src, len * sizeof(T), cudaMemcpyDefault, stream));
  } else {
    std::memcpy(dst, src, len * sizeof(T));
  }
}

}  // namespace detail
}  // namespace rapids
}  // namespace backend
}  // namespace triton
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/memory/detail
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/memory/detail/gpu_only/resource.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <triton/core/tritonbackend.h> #include <deque> #include <memory> #include <mutex> #include <rapids_triton/exceptions.hpp> #include <rapids_triton/memory/detail/resource.hpp> #include <rapids_triton/triton/device.hpp> #include <rapids_triton/triton/triton_memory_resource.hpp> #include <rmm/cuda_device.hpp> #include <rmm/mr/device/cuda_memory_resource.hpp> #include <rmm/mr/device/per_device_resource.hpp> namespace triton { namespace backend { namespace rapids { namespace detail { inline auto& resource_lock() { static auto lock = std::mutex{}; return lock; } /** A struct used solely to keep memory resources in-scope for the lifetime * of the backend */ struct resource_data { resource_data() : base_mr_{}, triton_mrs_{} {} auto* make_new_resource(device_id_t device_id, TRITONBACKEND_MemoryManager* manager) { if (manager == nullptr && triton_mrs_.size() != 0) { manager = triton_mrs_.back().get_triton_manager(); } triton_mrs_.emplace_back(manager, device_id, &base_mr_); return &(triton_mrs_.back()); } private: rmm::mr::cuda_memory_resource base_mr_; std::deque<triton_memory_resource> triton_mrs_; }; inline auto& get_device_resources() { static auto device_resources = resource_data{}; return device_resources; } inline auto is_triton_resource(rmm::cuda_device_id const& device_id) { auto* triton_mr = dynamic_cast<triton_memory_resource*>(rmm::mr::get_per_device_resource(device_id)); return (triton_mr != nullptr && triton_mr->get_triton_manager() != nullptr); } template<> inline void setup_memory_resource<true>(device_id_t device_id, TRITONBACKEND_MemoryManager* triton_manager) { auto lock = std::lock_guard<std::mutex>{detail::resource_lock()}; auto rmm_device_id = rmm::cuda_device_id{device_id}; if (!detail::is_triton_resource(rmm_device_id)) { auto& device_resources = detail::get_device_resources(); rmm::mr::set_per_device_resource(rmm_device_id, device_resources.make_new_resource(device_id, triton_manager)); } } /* inline auto* get_memory_resource(device_id_t device_id) { auto rmm_device_id = rmm::cuda_device_id{device_id}; return rmm::mr::get_per_device_resource(rmm_device_id); } inline auto* get_memory_resource() { return rmm::mr::get_current_device_resource(); } */ } // namespace detail } // namespace rapids } // namespace backend } // namespace triton
0
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/model/model.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #ifdef TRITON_ENABLE_GPU #include <cuda_runtime_api.h> #else #include <rapids_triton/cpu_only/cuda_runtime_replacement.hpp> #endif #include <cstddef> #include <rapids_triton/batch/batch.hpp> #include <rapids_triton/memory/resource.hpp> #include <rapids_triton/model/shared_state.hpp> #include <rapids_triton/tensor/tensor.hpp> #include <rapids_triton/triton/deployment.hpp> #include <rapids_triton/triton/device.hpp> #include <rapids_triton/utils/narrow.hpp> #include <string> #include <vector> namespace triton { namespace backend { namespace rapids { template <typename SharedState = SharedModelState> struct Model { virtual void predict(Batch& batch) const = 0; virtual void load() {} virtual void unload() {} /** * @brief Return the preferred memory type in which to store data for this * batch or std::nullopt to accept whatever Triton returns * * The base implementation of this method will require data on-host if the * model itself is deployed on the host OR if this backend has not been * compiled with GPU support. Otherwise, models deployed on device will * receive memory on device. Overriding this method will allow derived * model classes to select a preferred memory location based on properties * of the batch or to simply return std::nullopt if device memory or host * memory will do equally well. */ virtual std::optional<MemoryType> preferred_mem_type(Batch& batch) const { return (IS_GPU_BUILD && deployment_type_ == GPUDeployment) ? DeviceMemory : HostMemory; } virtual std::optional<MemoryType> preferred_mem_type_in(Batch& batch) const { return preferred_mem_type(batch); } virtual std::optional<MemoryType> preferred_mem_type_out(Batch& batch) const { return preferred_mem_type(batch); } /** * @brief Retrieve a stream used to set up batches for this model * * The base implementation of this method simply returns the default stream * provided by Triton for use with this model. Child classes may choose to * override this in order to provide different streams for use with * successive incoming batches. For instance, one might cycle through * several streams in order to distribute batches across them, but care * should be taken to ensure proper synchronization in this case. 
*/ virtual cudaStream_t get_stream() const { return default_stream_; } /** * @brief Get input tensor of a particular named input for an entire batch */ template <typename T> auto get_input(Batch& batch, std::string const& name, std::optional<MemoryType> const& mem_type, cudaStream_t stream) const { return batch.get_input<T const>(name, mem_type, device_id_, stream); } template <typename T> auto get_input(Batch& batch, std::string const& name, std::optional<MemoryType> const& mem_type) const { return get_input<T>(batch, name, mem_type, default_stream_); } template <typename T> auto get_input(Batch& batch, std::string const& name) const { return get_input<T>(batch, name, preferred_mem_type(batch), default_stream_); } /** * @brief Get output tensor of a particular named output for an entire batch */ template <typename T> auto get_output(Batch& batch, std::string const& name, std::optional<MemoryType> const& mem_type, device_id_t device_id, cudaStream_t stream) const { return batch.get_output<T>(name, mem_type, device_id, stream); } template <typename T> auto get_output(Batch& batch, std::string const& name, std::optional<MemoryType> const& mem_type, cudaStream_t stream) const { return get_output<T>(batch, name, mem_type, device_id_, stream); } template <typename T> auto get_output(Batch& batch, std::string const& name, std::optional<MemoryType> const& mem_type) const { return get_output<T>(batch, name, mem_type, device_id_, default_stream_); } template <typename T> auto get_output(Batch& batch, std::string const& name) const { return get_output<T>(batch, name, preferred_mem_type(batch), device_id_, default_stream_); } /** * @brief Retrieve value of configuration parameter */ template <typename T> auto get_config_param(std::string const& name) const { return shared_state_->template get_config_param<T>(name); } template <typename T> auto get_config_param(std::string const& name, T default_value) const { return shared_state_->template get_config_param<T>(name, default_value); } template <typename T> auto get_config_param(char const* name) const { return get_config_param<T>(std::string(name)); } template <typename T> auto get_config_param(char const* name, T default_value) const { return get_config_param<T>(std::string(name), default_value); } Model(std::shared_ptr<SharedState> shared_state, device_id_t device_id, cudaStream_t default_stream, DeploymentType deployment_type, std::string const& filepath) : shared_state_{shared_state}, device_id_{device_id}, default_stream_{default_stream}, deployment_type_{deployment_type}, filepath_{filepath} { if constexpr (IS_GPU_BUILD) { setup_memory_resource(device_id_); } } auto get_device_id() const { return device_id_; } auto get_deployment_type() const { return deployment_type_; } auto const& get_filepath() const { return filepath_; } auto get_output_shape(std::string const& name) const { return shared_state_->get_output_shape(name); } protected: auto get_shared_state() const { return shared_state_; } private: std::shared_ptr<SharedState> shared_state_; device_id_t device_id_; cudaStream_t default_stream_; DeploymentType deployment_type_; std::string filepath_; }; } // namespace rapids } // namespace backend } // namespace triton
0
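A backend's own model class derives from the template above, so the configuration lookups and memory-type preferences described here become one-line calls. The sketch below is illustrative only: `ExampleModel`, the `"threshold"` parameter, and the empty `predict` body are placeholders, and the default `SharedModelState` is used as the shared-state type.

```
struct ExampleModel : rapids::Model<> {
  // Reuse the base class constructor (shared state, device, stream, deployment, path)
  using rapids::Model<>::Model;

  void load() override
  {
    // Optional parameter from config.pbtxt; falls back to 0.5 when absent
    threshold_ = get_config_param<float>("threshold", 0.5f);
  }

  std::optional<rapids::MemoryType> preferred_mem_type(rapids::Batch& batch) const override
  {
    // Accept whatever memory location Triton already has the data in
    return std::nullopt;
  }

  void predict(rapids::Batch& batch) const override
  {
    // ... retrieve inputs/outputs from `batch` and run inference using threshold_ ...
  }

 private:
  float threshold_ = 0.5f;
};
```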
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/model/shared_state.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #ifdef TRITON_ENABLE_GPU #include <cuda_runtime_api.h> #else #include <rapids_triton/cpu_only/cuda_runtime_replacement.hpp> #endif #include <algorithm> #include <cstddef> #include <optional> #include <sstream> #include <string> #include <utility> #include <vector> #include <triton/backend/backend_common.h> #include <rapids_triton/batch/batch.hpp> #include <rapids_triton/tensor/tensor.hpp> #include <rapids_triton/triton/config.hpp> #include <rapids_triton/triton/deployment.hpp> #include <rapids_triton/utils/narrow.hpp> namespace triton { namespace backend { namespace rapids { /** * @brief Stores shared state for multiple instances of the same model */ struct SharedModelState { virtual void load() {} virtual void unload() {} explicit SharedModelState(std::unique_ptr<common::TritonJson::Value>&& config, bool squeeze_output = false) : config_{std::move(config)}, max_batch_size_{get_max_batch_size(*config_)}, output_shapes_([this, squeeze_output]() { auto result = std::vector<std::pair<std::string, std::vector<std::int64_t>>>{}; auto output_entries = triton::common::TritonJson::Value{}; triton_check(config_->MemberAsArray("output", &output_entries)); result.reserve(output_entries.ArraySize()); // Using a raw loop because TritonJSON::Value access has no iterator interface for (std::size_t i = 0; i < output_entries.ArraySize(); ++i) { auto output_entry = triton::common::TritonJson::Value{}; triton_check(output_entries.IndexAsObject(i, &output_entry)); auto name = std::string{}; triton_check(output_entry.MemberAsString("name", &name)); auto shape = std::vector<std::int64_t>{}; auto reshape_entry = triton::common::TritonJson::Value{}; if (output_entry.Find("reshape", &reshape_entry)) { ParseShape(reshape_entry, "shape", &shape); } else { ParseShape(output_entry, "dims", &shape); } if (shape[0] != -1) { shape.insert(shape.begin(), -1); } // The squeeze_output option was introduced to handle a bad choice of // convention in the original FIL backend implementation. For legacy // compatibility, we introduced this option into RAPIDS-Triton, but // in general, new backends are advised to avoid using it and defer // this sort of flattening operation to the consumer. 
          if (squeeze_output) {
            shape.erase(std::remove(shape.begin(), shape.end(), std::int64_t{1}), shape.end());
          }

          // Keep the entries sorted by name so the lookups below can use binary search
          result.insert(std::upper_bound(std::begin(result),
                                         std::end(result),
                                         name,
                                         [](auto& value, auto& entry) { return value < entry.first; }),
                        {name, shape});
        }
        return result;
      }())
  {
  }

  template <typename T>
  auto get_config_param(std::string const& name)
  {
    return get_config_param<T>(name, std::optional<T>{});
  }

  template <typename T>
  auto get_config_param(std::string const& name, T default_value)
  {
    return get_config_param<T>(name, std::make_optional(default_value));
  }

  auto get_output_shape(std::string const& name) const
  {
    auto cached_shape =
      std::lower_bound(std::begin(output_shapes_),
                       std::end(output_shapes_),
                       name,
                       [](auto& entry, auto& value) { return entry.first < value; });
    if (cached_shape == std::end(output_shapes_) || name != cached_shape->first) {
      auto log_stream = std::stringstream{};
      log_stream << "No output with name " << name << " in configuration.";
      throw TritonException(Error::Internal, log_stream.str());
    } else {
      return cached_shape->second;
    }
  }

  auto get_output_names() const
  {
    auto output_names = std::vector<std::string>{};
    output_names.reserve(output_shapes_.size());
    std::transform(std::begin(output_shapes_),
                   std::end(output_shapes_),
                   std::back_inserter(output_names),
                   [](auto& output_shape) { return output_shape.first; });
    return output_names;
  }

  auto check_output_name(std::string const& name) const
  {
    // #TODO: Figure out a way to use std::binary_search here
    auto cached_shape =
      std::lower_bound(std::begin(output_shapes_),
                       std::end(output_shapes_),
                       name,
                       [](auto& entry, auto& value) { return entry.first < value; });
    return cached_shape != std::end(output_shapes_) && name == cached_shape->first;
  }

 private:
  std::unique_ptr<common::TritonJson::Value> config_;
  Batch::size_type max_batch_size_;
  std::vector<std::pair<std::string, std::vector<std::int64_t>>> mutable output_shapes_;

  template <typename T>
  auto get_config_param(std::string const& name, std::optional<T> const& default_value)
  {
    auto result = T{};
    if (name == std::string("max_batch_size")) {
      result = max_batch_size_;
      return result;
    }
    auto parameters = common::TritonJson::Value{};
    auto json_value = common::TritonJson::Value{};
    if (config_->Find("parameters", &parameters) && parameters.Find(name.c_str(), &json_value)) {
      auto string_repr = std::string{};
      triton_check(json_value.MemberAsString("string_value", &string_repr));

      auto input_stream = std::istringstream{string_repr};

      if constexpr (std::is_same_v<T, bool>) {
        if (string_repr == "true") {
          result = true;
        } else if (string_repr == "false") {
          result = false;
        } else {
          throw TritonException(Error::Internal,
                                "Expected 'true' or 'false' for parameter '" + name + "', got: '" +
                                  string_repr + "'");
        }
      } else {
        input_stream >> result;
      }

      if (input_stream.fail()) {
        if (default_value) {
          result = *default_value;
        } else {
          throw TritonException(Error::InvalidArg, std::string("Bad input for parameter ") + name);
        }
      }
    } else {
      if (default_value) {
        result = *default_value;
      } else {
        throw TritonException(
          Error::InvalidArg,
          std::string("Required parameter ") + name + std::string(" not found in config"));
      }
    }

    return result;
  }
};

}  // namespace rapids
}  // namespace backend
}  // namespace triton
0
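SharedModelState is likewise intended to be subclassed, with `get_config_param` providing typed access to the `parameters` block of `config.pbtxt`. The sketch below shows one plausible shape for such a subclass; the class name and the parameter names (`num_classes`, `predict_proba`) are illustrative, not part of the library.

```
struct ExampleSharedState : rapids::SharedModelState {
  explicit ExampleSharedState(std::unique_ptr<triton::common::TritonJson::Value>&& config)
    : rapids::SharedModelState{std::move(config)}
  {
  }

  void load() override
  {
    // No default supplied: a missing or malformed value raises a TritonException
    num_classes_ = get_config_param<std::size_t>("num_classes");
    // Default supplied: absent or unparsable values fall back to false
    predict_proba_ = get_config_param<bool>("predict_proba", false);
  }

 private:
  std::size_t num_classes_ = 0;
  bool predict_proba_ = false;
};
```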
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/tensor/tensor.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <algorithm> #include <iterator> #include <memory> #include <numeric> #include <string> #include <type_traits> #include <utility> #include <vector> #ifdef TRITON_ENABLE_GPU #include <cuda_runtime_api.h> #else #include <rapids_triton/cpu_only/cuda_runtime_replacement.hpp> #endif #include <triton/backend/backend_output_responder.h> #include <rapids_triton/build_control.hpp> #include <rapids_triton/exceptions.hpp> #include <rapids_triton/memory/buffer.hpp> #include <rapids_triton/tensor/dtype.hpp> #include <rapids_triton/triton/device.hpp> #include <rapids_triton/utils/narrow.hpp> namespace triton { namespace backend { namespace rapids { template <typename T> struct BaseTensor { using size_type = typename Buffer<T>::size_type; BaseTensor() : shape_{}, buffer_{} {} BaseTensor(std::vector<size_type> const& shape, Buffer<T>&& buffer) : shape_(shape), buffer_{std::move(buffer)} { } virtual ~BaseTensor() = 0; /** * @brief Construct a BaseTensor from a collection of buffers * * Given a collection of buffers, collate them all into one buffer stored in * a new BaseTensor */ template <typename Iter> BaseTensor(std::vector<size_type> const& shape, Iter begin, Iter end, MemoryType mem_type, device_id_t device, cudaStream_t stream) : shape_(shape), buffer_([&begin, &end, &mem_type, &device, &stream]() { auto total_size = std::transform_reduce( begin, end, size_type{}, std::plus<>{}, [](auto&& buffer) { return buffer.size(); }); auto result = Buffer<T>(total_size, mem_type, device, stream); std::accumulate(begin, end, size_type{}, [&result](auto offset, auto& buffer) { copy(result, buffer, offset); return offset + buffer.size(); }); return result; }()) { } auto const& shape() const { return shape_; } auto size() const { return buffer_.size(); } auto data() const { return buffer_.data(); } auto& buffer() { return buffer_; } auto constexpr dtype() { return TritonDtype<T>::value; } auto mem_type() const { return buffer_.mem_type(); } auto stream() const { return buffer_.stream(); } auto device() const { return buffer_.device(); } void stream_synchronize() const { if (mem_type() == DeviceMemory) { buffer_.stream_synchronize(); } } void set_stream(cudaStream_t new_stream) { buffer_.set_stream(new_stream); } private: std::vector<size_type> shape_; Buffer<T> buffer_; }; template <typename T> BaseTensor<T>::~BaseTensor() { } template <typename T> struct Tensor final : BaseTensor<T> { Tensor() : BaseTensor<T>{} {} Tensor(std::vector<typename BaseTensor<T>::size_type> const& shape, Buffer<T>&& buffer) : BaseTensor<T>(shape, std::move(buffer)) { } template <typename Iter> Tensor(std::vector<typename BaseTensor<T>::size_type> const& shape, Iter begin, Iter end, MemoryType mem_type, device_id_t device, cudaStream_t stream) : BaseTensor<T>(shape, begin, end, mem_type, device, stream) { } }; template <typename T> struct OutputTensor final : BaseTensor<T> { OutputTensor(std::vector<typename BaseTensor<T>::size_type>&& 
shape, Buffer<T>&& buffer, std::string const& name, std::shared_ptr<BackendOutputResponder> responder) : BaseTensor<T>(std::move(shape), std::move(buffer)), name_{name}, responder_{responder} { } /** * @brief Prepare final output data from this tensor for responding to * request * * This method *must* be called by rapids_triton backends on all of their * output tensors before returning from their `predict` methods. Because we * cannot know a priori what names backends might have for their tensors * and what types will be stored in those tensors, the rapids_triton * library cannot store references to those tensors that might otherwise be * used to finalize them. */ void finalize() { auto& shape = BaseTensor<T>::shape(); auto triton_shape = std::vector<std::int64_t>{}; triton_shape.reserve(shape.size()); std::transform( std::begin(shape), std::end(shape), std::back_inserter(triton_shape), [](auto& val) { return narrow<int64_t>(val); }); // Must call the following because BackendOutputResponder does not expose // its stream, so we cannot be certain that our data is not being // processed on another stream. BaseTensor<T>::stream_synchronize(); responder_->ProcessTensor(name_.c_str(), TritonDtype<T>::value, triton_shape, reinterpret_cast<char*>(BaseTensor<T>::data()), BaseTensor<T>::mem_type(), BaseTensor<T>::device()); } private: std::string name_; std::shared_ptr<BackendOutputResponder> responder_; }; template <typename T, typename U, typename = std::enable_if_t<std::is_same_v<std::remove_const_t<U>, T>>> void copy(BaseTensor<T>& dst, BaseTensor<U>& src) { copy(dst.buffer(), src.buffer()); } /** * @brief Copy data from src Tensor into buffers indicated by iterators * * This method is provided to assist with distributing data from a single * Tensor into many smaller buffers which have been set up to receive a part * of the data from the src Tensor */ template <typename T, typename Iter> void copy(Iter begin, Iter end, BaseTensor<T>& src) { std::accumulate(begin, end, typename BaseTensor<T>::size_type{}, [&src](auto offset, auto& dst) { auto end_offset = offset + dst.size(); copy(dst.buffer(), src.buffer(), offset, end_offset); return end_offset; }); } } // namespace rapids } // namespace backend } // namespace triton
rapidsai_public_repos/rapids-triton/cpp/include/rapids_triton/tensor/dtype.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <triton/core/tritonserver.h> #include <cstdint> #include <iostream> #include <rapids_triton/utils/const_agnostic.hpp> namespace triton { namespace backend { namespace rapids { using DType = TRITONSERVER_DataType; auto constexpr DTypeBool = TRITONSERVER_TYPE_BOOL; auto constexpr DTypeUint8 = TRITONSERVER_TYPE_UINT8; auto constexpr DTypeChar = DTypeUint8; auto constexpr DTypeByte = DTypeUint8; auto constexpr DTypeUint16 = TRITONSERVER_TYPE_UINT16; auto constexpr DTypeUint32 = TRITONSERVER_TYPE_UINT32; auto constexpr DTypeUint64 = TRITONSERVER_TYPE_UINT64; auto constexpr DTypeInt8 = TRITONSERVER_TYPE_INT8; auto constexpr DTypeInt16 = TRITONSERVER_TYPE_INT16; auto constexpr DTypeInt32 = TRITONSERVER_TYPE_INT32; auto constexpr DTypeInt64 = TRITONSERVER_TYPE_INT64; auto constexpr DTypeFloat32 = TRITONSERVER_TYPE_FP32; auto constexpr DTypeFloat64 = TRITONSERVER_TYPE_FP64; template <DType D> struct TritonType { }; template <typename T, typename = void> struct TritonDtype { }; template <> struct TritonType<DTypeBool> { typedef bool type; }; template <typename T> struct TritonDtype<T, const_agnostic_same_t<T, bool>> { static constexpr DType value = DTypeBool; }; template <> struct TritonType<DTypeUint8> { typedef std::uint8_t type; }; template <typename T> struct TritonDtype<T, const_agnostic_same_t<T, std::uint8_t>> { static constexpr DType value = DTypeUint8; }; template <> struct TritonType<DTypeUint16> { typedef std::uint16_t type; }; template <typename T> struct TritonDtype<T, const_agnostic_same_t<T, std::uint16_t>> { static constexpr DType value = DTypeUint16; }; template <> struct TritonType<DTypeUint32> { typedef std::uint32_t type; }; template <typename T> struct TritonDtype<T, const_agnostic_same_t<T, std::uint32_t>> { static constexpr DType value = DTypeUint32; }; template <> struct TritonType<DTypeUint64> { typedef std::uint64_t type; }; template <typename T> struct TritonDtype<T, const_agnostic_same_t<T, std::uint64_t>> { static constexpr DType value = DTypeUint64; }; template <> struct TritonType<DTypeInt8> { typedef std::int8_t type; }; template <typename T> struct TritonDtype<T, const_agnostic_same_t<T, std::int8_t>> { static constexpr DType value = DTypeInt8; }; template <> struct TritonType<DTypeInt16> { typedef std::int16_t type; }; template <typename T> struct TritonDtype<T, const_agnostic_same_t<T, std::int16_t>> { static constexpr DType value = DTypeInt16; }; template <> struct TritonType<DTypeInt32> { typedef std::int32_t type; }; template <typename T> struct TritonDtype<T, const_agnostic_same_t<T, std::int32_t>> { static constexpr DType value = DTypeInt32; }; template <> struct TritonType<DTypeInt64> { typedef std::int64_t type; }; template <typename T> struct TritonDtype<T, const_agnostic_same_t<T, std::int64_t>> { static constexpr DType value = DTypeInt64; }; template <> struct TritonType<DTypeFloat32> { typedef float type; }; template <typename T> struct TritonDtype<T, 
const_agnostic_same_t<T, float>> { static constexpr DType value = DTypeFloat32; }; template <> struct TritonType<TRITONSERVER_TYPE_FP64> { typedef double type; }; template <typename T> struct TritonDtype<T, const_agnostic_same_t<T, double>> { static constexpr DType value = DTypeFloat64; }; inline std::ostream& operator<<(std::ostream& out, DType const& dtype) { out << TRITONSERVER_DataTypeString(dtype); return out; } } // namespace rapids } // namespace backend } // namespace triton
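The two traits above form a bidirectional, const-agnostic mapping between Triton's dtype enum and C++ types. A small compile-time check, sketched here, illustrates both directions:

```
#include <type_traits>

#include <rapids_triton/tensor/dtype.hpp>

namespace rapids = triton::backend::rapids;

// Triton enum -> C++ type
static_assert(std::is_same_v<rapids::TritonType<rapids::DTypeFloat32>::type, float>);
// C++ type -> Triton enum, ignoring const qualification
static_assert(rapids::TritonDtype<float>::value == rapids::DTypeFloat32);
static_assert(rapids::TritonDtype<float const>::value == rapids::DTypeFloat32);
```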
rapidsai_public_repos/rapids-triton/cpp/scripts/run-clang-format.py
# Copyright (c) 2019-2021, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # This file was copied from the rapidsai/cuml repo from __future__ import print_function import sys import re import os import subprocess import argparse import tempfile import shutil EXPECTED_VERSION = "11.0.0" VERSION_REGEX = re.compile(r"clang-format version ([0-9.]+)") DEFAULT_DIRS = ["cpp/src", "cpp/include", "cpp/test"] def parse_args(): argparser = argparse.ArgumentParser("Runs clang-format on a project") argparser.add_argument("-dstdir", type=str, default=None, help="Directory to store the temporary outputs of" " clang-format. If nothing is passed for this, then" " a temporary dir will be created using `mkdtemp`") argparser.add_argument("-exe", type=str, default="clang-format", help="Path to clang-format exe") argparser.add_argument("-inplace", default=False, action="store_true", help="Replace the source files itself.") argparser.add_argument("-regex", type=str, default=r"[.](cu|cuh|h|hpp|cpp)$", help="Regex string to filter in sources") argparser.add_argument("-ignore", type=str, default=r"cannylab/bh[.]cu$", help="Regex used to ignore files from matched list") argparser.add_argument("-v", dest="verbose", action="store_true", help="Print verbose messages") argparser.add_argument("dirs", type=str, nargs="*", help="List of dirs where to find sources") args = argparser.parse_args() args.regex_compiled = re.compile(args.regex) args.ignore_compiled = re.compile(args.ignore) if args.dstdir is None: args.dstdir = tempfile.mkdtemp() ret = subprocess.check_output("%s --version" % args.exe, shell=True) ret = ret.decode("utf-8") version = VERSION_REGEX.match(ret) if version is None: raise Exception("Failed to figure out clang-format version!") version = version.group(1) if version != EXPECTED_VERSION: raise Exception("clang-format exe must be v%s found '%s'" % \ (EXPECTED_VERSION, version)) if len(args.dirs) == 0: args.dirs = DEFAULT_DIRS return args def list_all_src_files(file_regex, ignore_regex, srcdirs, dstdir, inplace): allFiles = [] for srcdir in srcdirs: for root, dirs, files in os.walk(srcdir): for f in files: if re.search(file_regex, f): src = os.path.join(root, f) if re.search(ignore_regex, src): continue if inplace: _dir = root else: _dir = os.path.join(dstdir, root) dst = os.path.join(_dir, f) allFiles.append((src, dst)) return allFiles def run_clang_format(src, dst, exe, verbose): dstdir = os.path.dirname(dst) if not os.path.exists(dstdir): os.makedirs(dstdir) # run the clang format command itself if src == dst: cmd = "%s -i %s" % (exe, src) else: cmd = "%s %s > %s" % (exe, src, dst) try: subprocess.check_call(cmd, shell=True) except subprocess.CalledProcessError: print("Failed to run clang-format! Maybe your env is not proper?") raise # run the diff to check if there are any formatting issues cmd = "diff -q %s %s >/dev/null" % (src, dst) try: subprocess.check_call(cmd, shell=True) if verbose: print("%s passed" % os.path.basename(src)) except subprocess.CalledProcessError: print("%s failed! 
'diff %s %s' will show formatting violations!" % \ (os.path.basename(src), src, dst)) return False return True def main(): args = parse_args() # Attempt to making sure that we run this script from root of repo always if not os.path.exists(".git"): print("Error!! This needs to always be run from the root of repo") sys.exit(-1) all_files = list_all_src_files(args.regex_compiled, args.ignore_compiled, args.dirs, args.dstdir, args.inplace) # Check whether clang-format exists if shutil.which("clang-format") is None: print("clang-format not found. Exiting...") return # actual format checker status = True for src, dst in all_files: if not run_clang_format(src, dst, args.exe, args.verbose): status = False if not status: print("clang-format failed! You have 2 options:") print(" 1. Look at formatting differences above and fix them manually") print(" 2. Or run the below command to bulk-fix all these at once") print("Bulk-fix command: ") print(" python cpp/scripts/run-clang-format.py %s -inplace" % \ " ".join(sys.argv[1:])) sys.exit(-1) return if __name__ == "__main__": main()
rapidsai_public_repos/rapids-triton/cpp/cmake/doxygen.cmake
# Copyright (c) 2020, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # find_package(Doxygen 1.8.11) function(add_doxygen_target) if(Doxygen_FOUND) set(options "") set(oneValueArgs IN_DOXYFILE OUT_DOXYFILE CWD) set(multiValueArgs "") cmake_parse_arguments(dox "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN}) configure_file(${dox_IN_DOXYFILE} ${dox_OUT_DOXYFILE} @ONLY) add_custom_target(doc ${DOXYGEN_EXECUTABLE} ${dox_OUT_DOXYFILE} WORKING_DIRECTORY ${dox_CWD} VERBATIM COMMENT "Generate doxygen docs") else() message("add_doxygen_target: doxygen exe not found") endif() endfunction(add_doxygen_target)
rapidsai_public_repos/rapids-triton/cpp/cmake/modules/ConfigureCUDA.cmake
#============================================================================= # Copyright (c) 2021, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #============================================================================= if(DISABLE_DEPRECATION_WARNINGS) list(APPEND RAPIDS_TRITON_CXX_FLAGS -Wno-deprecated-declarations) list(APPEND RAPIDS_TRITON_CUDA_FLAGS -Xcompiler=-Wno-deprecated-declarations) endif() if(CMAKE_COMPILER_IS_GNUCXX) list(APPEND RAPIDS_TRITON_CXX_FLAGS -Wall -Werror -Wno-unknown-pragmas -Wno-error=deprecated-declarations) endif() list(APPEND RAPIDS_TRITON_CUDA_FLAGS --expt-extended-lambda --expt-relaxed-constexpr) # set warnings as errors if(CMAKE_CUDA_COMPILER_VERSION VERSION_GREATER_EQUAL 11.2.0) list(APPEND RAPIDS_TRITON_CUDA_FLAGS -Werror=all-warnings) endif() list(APPEND RAPIDS_TRITON_CUDA_FLAGS -Xcompiler=-Wall,-Werror,-Wno-error=deprecated-declarations) # Option to enable line info in CUDA device compilation to allow introspection when profiling / memchecking if(CUDA_ENABLE_LINEINFO) list(APPEND RAPIDS_TRITON_CUDA_FLAGS -lineinfo) endif() # Debug options if(CMAKE_BUILD_TYPE MATCHES Debug) message(VERBOSE "RAPIDS_TRITON: Building with debugging flags") list(APPEND RAPIDS_TRITON_CUDA_FLAGS -G -Xcompiler=-rdynamic) endif()
rapidsai_public_repos/rapids-triton/cpp/cmake/thirdparty/get_rapidjson.cmake
#============================================================================= # Copyright (c) 2021, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #============================================================================= # TODO(wphicks): Pass in version function(find_and_configure_rapidjson VERSION) rapids_cpm_find(rapidjson ${VERSION} GLOBAL_TARGETS rapidjson::rapidjson BUILD_EXPORT_SET rapids_triton-exports INSTALL_EXPORT_SET rapids_triton-exports CPM_ARGS GIT_REPOSITORY https://github.com/Tencent/rapidjson GIT_TAG "v${VERSION}" GIT_SHALLOW ON OPTIONS "RAPIDJSON_BUILD_DOC OFF" "RAPIDJSON_BUILD_EXAMPLES OFF" "RAPIDJSON_BUILD_TESTS OFF" "RAPIDJSON_BUILD_THIRDPARTY_GTEST OFF" ) endfunction() find_and_configure_rapidjson("1.1.0")
rapidsai_public_repos/rapids-triton/cpp/cmake/thirdparty/get_raft.cmake
#============================================================================= # Copyright (c) 2021, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #============================================================================= function(find_and_configure_raft) set(oneValueArgs VERSION FORK PINNED_TAG) cmake_parse_arguments(PKG "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN} ) rapids_cpm_find(raft ${PKG_VERSION} GLOBAL_TARGETS raft::raft BUILD_EXPORT_SET rapids_triton-exports INSTALL_EXPORT_SET rapids_triton-exports CPM_ARGS GIT_REPOSITORY https://github.com/${PKG_FORK}/raft.git GIT_TAG ${PKG_PINNED_TAG} SOURCE_SUBDIR cpp OPTIONS "BUILD_TESTS OFF" "RAFT_COMPILE_LIBRARIES OFF" ) message(VERBOSE "RAPIDS_TRITON: Using RAFT located in ${raft_SOURCE_DIR}") endfunction() set(RAPIDS_TRITON_MIN_VERSION_raft "${RAPIDS_TRITON_VERSION_MAJOR}.${RAPIDS_TRITON_VERSION_MINOR}.00") set(RAPIDS_TRITON_BRANCH_VERSION_raft "${RAPIDS_TRITON_VERSION_MAJOR}.${RAPIDS_TRITON_VERSION_MINOR}") # Change pinned tag here to test a commit in CI # To use a different RAFT locally, set the CMake variable # CPM_raft_SOURCE=/path/to/local/raft find_and_configure_raft(VERSION ${RAPIDS_TRITON_MIN_VERSION_raft} FORK rapidsai PINNED_TAG branch-${RAPIDS_TRITON_BRANCH_VERSION_raft} )
rapidsai_public_repos/rapids-triton/cpp/cmake/thirdparty/get_gtest.cmake
#============================================================================= # Copyright (c) 2021, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #============================================================================= function(find_and_configure_gtest) include(${rapids-cmake-dir}/cpm/gtest.cmake) rapids_cpm_gtest() endfunction() find_and_configure_gtest()
rapidsai_public_repos/rapids-triton/cpp/cmake/thirdparty/get_triton.cmake
#============================================================================= # Copyright (c) 2021, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #============================================================================= include(FetchContent) FetchContent_Declare( repo-common GIT_REPOSITORY https://github.com/triton-inference-server/common.git GIT_TAG ${TRITON_COMMON_REPO_TAG} GIT_SHALLOW ON ) FetchContent_Declare( repo-core GIT_REPOSITORY https://github.com/triton-inference-server/core.git GIT_TAG ${TRITON_CORE_REPO_TAG} GIT_SHALLOW ON ) FetchContent_Declare( repo-backend GIT_REPOSITORY https://github.com/triton-inference-server/backend.git GIT_TAG ${TRITON_BACKEND_REPO_TAG} GIT_SHALLOW ON ) FetchContent_MakeAvailable(repo-common repo-core repo-backend)
rapidsai_public_repos/rapids-triton/cpp/cmake/thirdparty/get_rmm.cmake
#============================================================================= # Copyright (c) 2021, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #============================================================================= function(find_and_configure_rmm) include(${rapids-cmake-dir}/cpm/rmm.cmake) rapids_cpm_rmm() endfunction() find_and_configure_rmm()
rapidsai_public_repos/rapids-triton/cpp/src/api.cc
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include <model.h> #include <names.h> #include <shared_state.h> #include <stdint.h> #include <triton/backend/backend_common.h> #include <triton/backend/backend_model.h> #include <triton/backend/backend_model_instance.h> #include <rapids_triton/triton/api/execute.hpp> #include <rapids_triton/triton/api/initialize.hpp> #include <rapids_triton/triton/api/instance_finalize.hpp> #include <rapids_triton/triton/api/instance_initialize.hpp> #include <rapids_triton/triton/api/model_finalize.hpp> #include <rapids_triton/triton/api/model_initialize.hpp> #include <rapids_triton/triton/model_instance_state.hpp> #include <rapids_triton/triton/model_state.hpp> namespace triton { namespace backend { namespace NAMESPACE { using ModelState = rapids::TritonModelState<RapidsSharedState>; using ModelInstanceState = rapids::ModelInstanceState<RapidsModel, RapidsSharedState>; extern "C" { /** Confirm that backend is compatible with Triton's backend API version */ TRITONSERVER_Error* TRITONBACKEND_Initialize(TRITONBACKEND_Backend* backend) { return rapids::triton_api::initialize(backend); } TRITONSERVER_Error* TRITONBACKEND_ModelInitialize(TRITONBACKEND_Model* model) { return rapids::triton_api::model_initialize<ModelState>(model); } TRITONSERVER_Error* TRITONBACKEND_ModelFinalize(TRITONBACKEND_Model* model) { return rapids::triton_api::model_finalize<ModelState>(model); } TRITONSERVER_Error* TRITONBACKEND_ModelInstanceInitialize( TRITONBACKEND_ModelInstance* instance) { return rapids::triton_api::instance_initialize<ModelState, ModelInstanceState>(instance); } TRITONSERVER_Error* TRITONBACKEND_ModelInstanceFinalize( TRITONBACKEND_ModelInstance* instance) { return rapids::triton_api::instance_finalize<ModelInstanceState>(instance); } TRITONSERVER_Error* TRITONBACKEND_ModelInstanceExecute( TRITONBACKEND_ModelInstance* instance, TRITONBACKEND_Request** raw_requests, uint32_t const request_count) { return rapids::triton_api::execute<ModelState, ModelInstanceState>( instance, raw_requests, static_cast<std::size_t>(request_count)); } } // extern "C" } // namespace NAMESPACE } // namespace backend } // namespace triton
rapidsai_public_repos/rapids-triton/cpp/src/shared_state.h
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <names.h> #include <memory> #include <rapids_triton/model/shared_state.hpp> namespace triton { namespace backend { namespace NAMESPACE { /* Triton allows multiple instances of a single model to be instantiated at the * same time (e.g. on different GPUs). All instances of a model share access to * an object which manages any state that can be shared across all instances. * Any logic necessary for managing such state should be implemented in a * struct named RapidsSharedState, as shown here. Models may access this shared * state object via the `get_shared_state` method, which returns a shared * pointer to the RapidsSharedState object. * * Not all backends require shared state, so leaving this implementation empty * is entirely valid */ struct RapidsSharedState : rapids::SharedModelState { RapidsSharedState(std::unique_ptr<common::TritonJson::Value>&& config) : rapids::SharedModelState{std::move(config)} { } void load() {} void unload() {} }; } // namespace NAMESPACE } // namespace backend } // namespace triton
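For backends that do need shared state, a typical pattern is to read configuration once in `load` and cache the result for all instances. The sketch below is a variant of the struct above (same file, same namespace); the "normalize_output" parameter name and the cached member are invented for illustration, while `get_config_param<bool>` matches the usage exercised in the library's tests.

```
struct RapidsSharedState : rapids::SharedModelState {
  RapidsSharedState(std::unique_ptr<common::TritonJson::Value>&& config)
    : rapids::SharedModelState{std::move(config)} {}

  // Read the hypothetical "normalize_output" parameter once; every model
  // instance then shares the cached value via get_shared_state()
  void load() { normalize_output_ = get_config_param<bool>("normalize_output"); }
  void unload() {}

  bool normalize_output_ = false;
};
```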
rapidsai_public_repos/rapids-triton/cpp/src/CMakeLists.txt
#============================================================================= # Copyright (c) 2021, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #============================================================================= # keep the files in alphabetical order! add_library( triton_rapids-identity SHARED src/api.cc ) if(TRITON_ENABLE_GPU) set_target_properties(triton_rapids-identity PROPERTIES BUILD_RPATH "\$ORIGIN" # set target compile options CXX_STANDARD 17 CXX_STANDARD_REQUIRED ON CUDA_STANDARD 17 CUDA_STANDARD_REQUIRED ON POSITION_INDEPENDENT_CODE ON INTERFACE_POSITION_INDEPENDENT_CODE ON ) else() set_target_properties(triton_rapids-identity PROPERTIES BUILD_RPATH "\$ORIGIN" # set target compile options CXX_STANDARD 17 CXX_STANDARD_REQUIRED ON POSITION_INDEPENDENT_CODE ON INTERFACE_POSITION_INDEPENDENT_CODE ON ) endif() target_compile_options(triton_rapids-identity PRIVATE "$<$<COMPILE_LANGUAGE:CXX>:${RAPIDS_TRITON_CXX_FLAGS}>" "$<$<COMPILE_LANGUAGE:CUDA>:${RAPIDS_TRITON_CUDA_FLAGS}>" ) target_include_directories(triton_rapids-identity PRIVATE "$<BUILD_INTERFACE:${RAPIDS_TRITON_SOURCE_DIR}/include>" "${CMAKE_CURRENT_SOURCE_DIR}/src" ) target_link_libraries(triton_rapids-identity PRIVATE $<$<BOOL:${TRITON_ENABLE_GPU}>:rmm::rmm> $<$<BOOL:${TRITON_ENABLE_GPU}>:raft::raft> triton-core-serverstub triton-backend-utils "${TRITONSERVER_LIB}" $<TARGET_NAME_IF_EXISTS:conda_env> ) install( TARGETS triton_rapids-identity LIBRARY DESTINATION /opt/tritonserver/backends/rapids-identity )
rapidsai_public_repos/rapids-triton/cpp/src/model.h
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #ifdef TRITON_ENABLE_GPU #include <cuda_runtime_api.h> #else #include <rapids_triton/cpu_only/cuda_runtime_replacement.hpp> #endif #include <names.h> #include <shared_state.h> #include <memory> #include <optional> #include <rapids_triton/batch/batch.hpp> // rapids::Batch #include <rapids_triton/memory/types.hpp> // rapids::MemoryType #include <rapids_triton/model/model.hpp> // rapids::Model #include <rapids_triton/tensor/tensor.hpp> // rapids::copy #include <rapids_triton/triton/deployment.hpp> // rapids::DeploymentType #include <rapids_triton/triton/device.hpp> // rapids::device_id_t namespace triton { namespace backend { namespace NAMESPACE { /* Any logic necessary to perform inference with a model and manage its data * should be implemented in a struct named RapidsModel, as shown here */ struct RapidsModel : rapids::Model<RapidsSharedState> { /*************************************************************************** * BOILERPLATE * * *********************************************************************** * * The following constructor can be copied directly into any model * implementation. **************************************************************************/ RapidsModel(std::shared_ptr<RapidsSharedState> shared_state, rapids::device_id_t device_id, cudaStream_t default_stream, rapids::DeploymentType deployment_type, std::string const& filepath) : rapids::Model<RapidsSharedState>( shared_state, device_id, default_stream, deployment_type, filepath) { } /*************************************************************************** * BASIC FEATURES * * *********************************************************************** * * The only method that *must* be implemented for a viable model is the * `predict` method, but the others presented here are often used for basic * model implementations. Filling out these methods should take care of most * use cases. **************************************************************************/ /*************************************************************************** * predict * * *********************************************************************** * * This method performs the actual inference step on input data. Implementing * a predict function requires four steps: * 1. Call `get_input` on the provided `Batch` object for each of the input * tensors named in the config file for this backend. This provides a * `Tensor` object containing the input data. * 2. Call `get_output` on the provided `Batch` object for each of the output * tensors named in the config file for this backend. This provides a * `Tensor` object to which output values can be written. * 3. Perform inference based on the input Tensors and store the results in * the output Tensors. `some_tensor.data()` can be used to retrieve a raw * pointer to the underlying data. * 4. Call the `finalize` method on all output tensors. 
**************************************************************************/ void predict(rapids::Batch& batch) const { // 1. Acquire a tensor representing the input named "input__0" auto input = get_input<float>(batch, "input__0"); // 2. Acquire a tensor representing the output named "output__0" auto output = get_output<float>(batch, "output__0"); // 3. Perform inference. In this example, we simply copy the data from the // input to the output tensor. rapids::copy(output, input); // 4. Call finalize on all output tensors. In this case, we have just one // output, so we call finalize on it. output.finalize(); } /*************************************************************************** * load / unload * * *********************************************************************** * * These methods can be used to perform one-time loading/unloading of * resources when a model is created. For example, data representing the * model may be loaded onto the GPU in the `load` method and unloaded in the * `unload` method. This data will then remain loaded while the server is * running. * * While these methods take no arguments, it is typical to read any necessary * input from the model configuration file by using the `get_config_param` * method. Any parameters defined in the "parameters" section of the config * can be accessed by name in this way. The maximum batch size can also be * retrieved using the name "max_batch_size". * * These methods need not be explicitly implemented if no loading/unloading * logic is required, but we show them here for illustrative purposes. **************************************************************************/ void load() {} void unload() {} /*************************************************************************** * ADVANCED FEATURES * * *********************************************************************** * * None of the following methods are required to be implemented in order to * create a valid model, but they are presented here for those who require * the additional functionality they provide. **************************************************************************/ /*************************************************************************** * preferred_mem_type / preferred_mem_type_in / preferred_mem_type_out * * *********************************************************************** * * If implemented, `preferred_mem_type` allows for control over when input * and output data are provided on the host versus on device. In the case * that a model prefers to receive its input on-host but return output * on-device (or vice versa), `preferred_mem_type_in` and * `preferred_mem_type_out` can be used for even more precise control. * * In this example, we simply return `std::nullopt` to indicate that the * model has no preference on its input/output data locations. Note that the * Batch being processed is taken as input to this function to facilitate * implementations that may switch their preferred memory location based on * properties of the batch. * * Valid MemoryType options to return are rapids::HostMemory and * rapids::DeviceMemory. **************************************************************************/ std::optional<rapids::MemoryType> preferred_mem_type(rapids::Batch& batch) const { return std::nullopt; } }; } // namespace NAMESPACE } // namespace backend } // namespace triton
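To make the optional hooks above concrete, the fragment below shows members that could replace the empty `load` and the `preferred_mem_type` stub in the struct above: caching `max_batch_size` at load time and preferring device memory only in GPU-enabled builds. This is a sketch, not part of the example backend; the `std::size_t` template argument to `get_config_param` is an assumption, and `IS_GPU_BUILD` requires including `rapids_triton/build_control.hpp`.

```
void load() {
  // Cache the configured maximum batch size once at model load time
  // (std::size_t as the parameter type is an assumption)
  max_batch_size_ = get_config_param<std::size_t>("max_batch_size");
}

std::optional<rapids::MemoryType> preferred_mem_type(rapids::Batch& batch) const {
  // Prefer device memory for GPU-enabled builds, host memory otherwise
  return rapids::IS_GPU_BUILD ? rapids::DeviceMemory : rapids::HostMemory;
}

std::size_t max_batch_size_ = 0;
```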
rapidsai_public_repos/rapids-triton/cpp/src/names.h
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once /* Triton expects certain definitions within its backend libraries to follow * specific naming conventions. Specifically, for a backend named * "rapids_identity," most definitions should appear within a namespace called * triton::backend::rapids_identity. * * In order to facilitate this with minimal effort on the part of backend * developers, we ask that you put the name of your backend here. This macro is * then used to propagate the correct namespace name wherever it is needed in * the impl and interface code. */ #define NAMESPACE rapids_identity
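As a concrete, purely hypothetical example, a backend intended to be deployed as "my_backend" would change only this one definition:

```
#define NAMESPACE my_backend
```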
rapidsai_public_repos/rapids-triton/cpp/test/exceptions.cpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifdef TRITON_ENABLE_GPU #include <cuda_runtime_api.h> #else #include <rapids_triton/cpu_only/cuda_runtime_replacement.hpp> #endif #include <gtest/gtest.h> #include <rapids_triton/exceptions.hpp> #include <string> namespace triton { namespace backend { namespace rapids { TEST(RapidsTriton, default_except) { try { throw TritonException(); } catch (TritonException const& err) { EXPECT_EQ(std::string(err.what()), std::string("encountered unknown error")); } } TEST(RapidsTriton, msg_except) { auto msg = std::string("TEST ERROR MESSAGE"); try { throw TritonException(Error::Internal, msg); } catch (TritonException const& err) { EXPECT_EQ(std::string(err.what()), msg); } try { throw TritonException(Error::Internal, msg.c_str()); } catch (TritonException const& err) { EXPECT_EQ(std::string(err.what()), msg); } try { throw TritonException(Error::Internal, msg); } catch (TritonException const& err) { try { throw(TritonException(err.error())); } catch (TritonException const& err2) { EXPECT_EQ(std::string(err2.what()), msg); } } } TEST(RapidsTriton, triton_check) { auto msg = std::string("TEST ERROR MESSAGE"); EXPECT_THROW(triton_check(TRITONSERVER_ErrorNew(Error::Internal, msg.c_str())), TritonException); triton_check(nullptr); } TEST(RapidsTriton, cuda_check) { #ifdef TRITON_ENABLE_GPU EXPECT_THROW(cuda_check(cudaError::cudaErrorMissingConfiguration), TritonException); cuda_check(cudaError::cudaSuccess); #else EXPECT_THROW(cuda_check(cudaError::cudaErrorNonGpuBuild), TritonException); #endif } } // namespace rapids } // namespace backend } // namespace triton
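The tests above also show the idiom backends use in practice: raise a `TritonException` for domain errors and wrap Triton or CUDA return codes with `triton_check`/`cuda_check`. A minimal sketch of the first idiom, with an invented function name and message:

```
#include <string>

#include <rapids_triton/exceptions.hpp>

namespace rapids = triton::backend::rapids;

void check_config(bool config_is_valid) {
  // Surface a problem to Triton as a structured error rather than crashing
  if (!config_is_valid) {
    throw rapids::TritonException(rapids::Error::Internal,
                                  std::string{"bad model configuration"});
  }
}
```

In GPU code paths, wrapping calls such as `cudaStreamSynchronize` in `cuda_check` converts failures into the same exception type, as the `cuda_check` test above exercises.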
rapidsai_public_repos/rapids-triton/cpp/test/build_control.cpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include <gtest/gtest.h> #include <rapids_triton/build_control.hpp> namespace triton { namespace backend { namespace rapids { TEST(RapidsTriton, build_control) { #ifdef TRITON_ENABLE_GPU ASSERT_EQ(IS_GPU_BUILD, true) << "IS_GPU_BUILD constant has wrong value\n"; #else ASSERT_EQ(IS_GPU_BUILD, false) << "IS_GPU_BUILD constant has wrong value\n"; #endif } } // namespace rapids } // namespace backend } // namespace triton
rapidsai_public_repos/rapids-triton/cpp/test/CMakeLists.txt
#============================================================================= # Copyright (c) 2021, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #============================================================================= # keep the files in alphabetical order! add_executable(test_rapids_triton test/batch/batch.cpp test/build_control.cpp test/exceptions.cpp test/memory/buffer.cpp test/memory/detail/copy.cpp test/memory/detail/owned_device_buffer.cpp test/memory/resource.cpp test/memory/types.cpp test/tensor/dtype.cpp test/tensor/tensor.cpp test/test.cpp test/triton/api/execute.cpp test/triton/api/initialize.cpp test/triton/api/instance_finalize.cpp test/triton/api/instance_initialize.cpp test/triton/api/model_finalize.cpp test/triton/api/model_initialize.cpp test/triton/backend.cpp test/triton/config.cpp test/triton/deployment.cpp test/triton/device.cpp test/triton/input.cpp test/triton/logging.cpp test/triton/model.cpp test/triton/model_instance.cpp test/triton/requests.cpp test/triton/responses.cpp test/triton/statistics.cpp test/utils/const_agnostic.cpp test/utils/narrow.cpp ) IF(TRITON_ENABLE_GPU) set_target_properties(test_rapids_triton PROPERTIES BUILD_RPATH "\$ORIGIN" # set target compile options CXX_STANDARD 17 CXX_STANDARD_REQUIRED ON CUDA_STANDARD 17 CUDA_STANDARD_REQUIRED ON POSITION_INDEPENDENT_CODE ON INTERFACE_POSITION_INDEPENDENT_CODE ON ) else() set_target_properties(test_rapids_triton PROPERTIES BUILD_RPATH "\$ORIGIN" # set target compile options CXX_STANDARD 17 CXX_STANDARD_REQUIRED ON POSITION_INDEPENDENT_CODE ON INTERFACE_POSITION_INDEPENDENT_CODE ON ) endif() target_compile_options(test_rapids_triton PRIVATE "$<$<COMPILE_LANGUAGE:CXX>:${RAPIDS_TRITON_CXX_FLAGS}>" "$<$<COMPILE_LANGUAGE:CUDA>:${RAPIDS_TRITON_CUDA_FLAGS}>" ) target_include_directories(test_rapids_triton PUBLIC "$<BUILD_INTERFACE:${RAPIDS_TRITON_SOURCE_DIR}/include>" "$<BUILD_INTERFACE:${RAPIDS_TRITON_SOURCE_DIR}/test>" ) find_library( TRITONSERVER_LIB tritonserver PATHS /opt/tritonserver/lib ) target_link_libraries(test_rapids_triton PRIVATE $<$<BOOL:${TRITON_ENABLE_GPU}>:rmm::rmm> $<$<BOOL:${TRITON_ENABLE_GPU}>:raft::raft> triton-core-serverstub triton-backend-utils gmock gmock_main GTest::gtest GTest::gtest_main "${TRITONSERVER_LIB}" $<TARGET_NAME_IF_EXISTS:conda_env> )
rapidsai_public_repos/rapids-triton/cpp/test/test.cpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include <gtest/gtest.h> #include <rapids_triton.hpp> namespace triton { namespace backend { namespace rapids { TEST(RapidsTriton, installed) { std::cout << test_install() << "\n"; } } // namespace rapids } // namespace backend } // namespace triton
rapidsai_public_repos/rapids-triton/cpp/test/utils/const_agnostic.cpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include <gtest/gtest.h> #include <rapids_triton/utils/const_agnostic.hpp> #include <type_traits> namespace triton { namespace backend { namespace rapids { TEST(RapidsTriton, const_agnostic) { static_assert(std::is_same<const_agnostic_same_t<bool const, bool>, void>::value); static_assert(std::is_same<const_agnostic_same_t<bool, bool>, void>::value); } } // namespace rapids } // namespace backend } // namespace triton
rapidsai_public_repos/rapids-triton/cpp/test/utils/narrow.cpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include <gtest/gtest.h> #include <rapids_triton/utils/narrow.hpp> #include <string> namespace triton { namespace backend { namespace rapids { TEST(RapidsTriton, narrow) { EXPECT_THROW(narrow<std::size_t>(-1), TritonException); narrow<std::size_t>(int{5}); EXPECT_THROW(narrow<int>(std::numeric_limits<std::size_t>::max()), TritonException); narrow<int>(std::size_t{5}); } } // namespace rapids } // namespace backend } // namespace triton
rapidsai_public_repos/rapids-triton/cpp/test/triton/model.cpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include <gtest/gtest.h> #include <string> #include <rapids_triton/model/shared_state.hpp> namespace triton { namespace backend { namespace rapids { bool TestBoolString(std::string b){ auto model_config = std::make_unique<common::TritonJson::Value>(common::TritonJson::ValueType::OBJECT); model_config->AddInt("max_batch_size", 1); model_config->Add("output", common::TritonJson::Value(common::TritonJson::ValueType::ARRAY)); auto params = common::TritonJson::Value(common::TritonJson::ValueType::OBJECT); auto string_value = common::TritonJson::Value(common::TritonJson::ValueType::OBJECT); string_value.AddString("string_value", b); params.Add("some_bool", std::move(string_value)); model_config->Add("parameters",std::move(params)); SharedModelState s(std::move(model_config)); return s.get_config_param<bool>("some_bool"); } TEST(RapidsTriton, bool_param) { EXPECT_TRUE(TestBoolString("true")); EXPECT_FALSE(TestBoolString("false")); EXPECT_THROW( { try { TestBoolString("True"); } catch (const TritonException& e) { EXPECT_STREQ("Expected 'true' or 'false' for parameter 'some_bool', got: 'True'", e.what()); throw; } }, TritonException); } } // namespace rapids } // namespace backend } // namespace triton
rapidsai_public_repos/rapids-triton/cpp/test/triton/statistics.cpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include <rapids_triton/triton/statistics.hpp>
rapidsai_public_repos/rapids-triton/cpp/test/triton/deployment.cpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include <rapids_triton/triton/deployment.hpp>
rapidsai_public_repos/rapids-triton/cpp/test/triton/model_instance.cpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include <rapids_triton/triton/model_instance.hpp>
rapidsai_public_repos/rapids-triton/cpp/test/triton/logging.cpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include <gtest/gtest.h> #include <iostream> #include <rapids_triton/triton/logging.hpp> namespace triton { namespace backend { namespace rapids { TEST(RapidsTriton, logging) { log_debug("Debug test message"); log_info("Info test message"); log_warn("Warn test message"); log_error("Error test message"); } TEST(RapidsTriton, stream_logging) { log_debug() << "Streamed debug test message"; log_info() << "Streamed info test message"; log_warn() << "Streamed warn test message"; log_error() << "Streamed error test message"; } } // namespace rapids } // namespace backend } // namespace triton
rapidsai_public_repos/rapids-triton/cpp/test/triton/requests.cpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include <rapids_triton/triton/requests.hpp>
rapidsai_public_repos/rapids-triton/cpp/test/triton/input.cpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include <rapids_triton/triton/input.hpp>
rapidsai_public_repos/rapids-triton/cpp/test/triton/config.cpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include <rapids_triton/triton/config.hpp>
rapidsai_public_repos/rapids-triton/cpp/test/triton/device.cpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include <rapids_triton/triton/device.hpp>
rapidsai_public_repos/rapids-triton/cpp/test/triton/responses.cpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include <rapids_triton/triton/responses.hpp>
rapidsai_public_repos/rapids-triton/cpp/test/triton/backend.cpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include <rapids_triton/triton/backend.hpp>
rapidsai_public_repos/rapids-triton/cpp/test/triton/api/instance_finalize.cpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include <rapids_triton/triton/api/instance_finalize.hpp>
rapidsai_public_repos/rapids-triton/cpp/test/triton/api/initialize.cpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include <rapids_triton/triton/api/initialize.hpp>
0