Viam C++ SDK (current)
viam::sdk::impl::MLModelServiceClient Class Reference

#include <mlmodel_client.hpp>

Inheritance diagram for viam::sdk::impl::MLModelServiceClient (not shown): derives from viam::sdk::MLModelService, viam::sdk::Service, and viam::sdk::Resource.

Public Types

using interface_type = MLModelService
 
using service_type = viam::service::mlmodel::v1::MLModelService
 
- Public Types inherited from viam::sdk::MLModelService
template<typename T >
using tensor_view = typename make_tensor_view_<T>::type
 
using signed_integral_base_types
 
using unsigned_integral_base_types
 
using integral_base_types
 
using fp_base_types = boost::mpl::list<float, double>
 
using base_types = boost::mpl::joint_view<integral_base_types, fp_base_types>
 
using tensor_view_types
 
using tensor_views = boost::make_variant_over<tensor_view_types>::type
 
using named_tensor_views = std::unordered_map<std::string, tensor_views>
 

Public Member Functions

 MLModelServiceClient (std::string name, std::shared_ptr< grpc::Channel > channel)
 
std::shared_ptr< named_tensor_views > infer (const named_tensor_views &inputs, const ProtoStruct &extra) override
 Runs the model against the input tensors and returns inference results as tensors.
 
struct metadata metadata (const ProtoStruct &extra) override
 Returns metadata describing the inputs and outputs of the model.
 
std::shared_ptr< named_tensor_views > infer (const named_tensor_views &inputs)
 Runs the model against the input tensors and returns inference results as tensors.
 
virtual std::shared_ptr< named_tensor_views > infer (const named_tensor_views &inputs, const ProtoStruct &extra)=0
 Runs the model against the input tensors and returns inference results as tensors.
 
struct metadata metadata ()
 Returns metadata describing the inputs and outputs of the model.
 
virtual struct metadata metadata (const ProtoStruct &extra)=0
 Returns metadata describing the inputs and outputs of the model.
 
- Public Member Functions inherited from viam::sdk::MLModelService
API api () const override
 Returns the API associated with a particular resource.
 
std::shared_ptr< named_tensor_views > infer (const named_tensor_views &inputs)
 Runs the model against the input tensors and returns inference results as tensors.
 
struct metadata metadata ()
 Returns metadata describing the inputs and outputs of the model.
 
- Public Member Functions inherited from viam::sdk::Service
Name get_resource_name () const override
 Returns the Name for a particular resource.
 
- Public Member Functions inherited from viam::sdk::Resource
 Resource (std::string name)
 
virtual std::string name () const
 Return the resource's name.
 

Additional Inherited Members

- Static Public Member Functions inherited from viam::sdk::MLModelService
template<typename T >
static tensor_view< T > make_tensor_view (const T *data, std::size_t size, typename tensor_view< T >::shape_type shape)
 
- Protected Member Functions inherited from viam::sdk::MLModelService
 MLModelService (std::string name)
 
- Protected Member Functions inherited from viam::sdk::Service
 Service (std::string name)
 
- Protected Member Functions inherited from viam::sdk::Resource
Name get_resource_name (const std::string &type) const
 

Detailed Description

An MLModelServiceClient provides client-side access to a remotely served ML Model Service. Use this class to communicate with MLModelService instances running elsewhere.
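A minimal usage sketch follows. It assumes a viam-server (or another gRPC endpoint) exposing an ML model service; the target address, the service name "my_mlmodel", and the include path are illustrative, not taken from this page:

```cpp
#include <memory>
#include <string>

#include <grpcpp/create_channel.h>
#include <grpcpp/security/credentials.h>

#include <mlmodel_client.hpp>  // illustrative; use the SDK's actual include path

int main() {
    // Open a channel to the machine serving the ML model service.
    auto channel = grpc::CreateChannel("localhost:8080",
                                       grpc::InsecureChannelCredentials());

    viam::sdk::impl::MLModelServiceClient client("my_mlmodel", channel);

    // Ask the model what inputs and outputs it expects.
    auto md = client.metadata();

    // Build input tensors according to the metadata, then run inference.
    // Keep `outputs` alive for as long as the returned tensor views are used.
    viam::sdk::MLModelService::named_tensor_views inputs;  // populate per md
    auto outputs = client.infer(inputs);
    return 0;
}
```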

Member Function Documentation

◆ infer() [1/3]

std::shared_ptr< named_tensor_views > viam::sdk::MLModelService::infer ( const named_tensor_views & inputs)
inline

Runs the model against the input tensors and returns inference results as tensors.

Parameters
`inputs`The input tensors on which to run inference.
Returns
The results of the inference as a shared pointer to named_tensor_views. The data viewed by the views is only valid for the lifetime of the returned shared_ptr.

◆ infer() [2/3]

std::shared_ptr< named_tensor_views > viam::sdk::impl::MLModelServiceClient::infer ( const named_tensor_views & inputs, const ProtoStruct & extra )
overridevirtual

Runs the model against the input tensors and returns inference results as tensors.

Parameters
`inputs`The input tensors on which to run inference.
`extra`Any additional arguments to the method.
Returns
The results of the inference as a shared pointer to named_tensor_views. The data viewed by the views is only valid for the lifetime of the returned shared_ptr.

Implements viam::sdk::MLModelService.

◆ infer() [3/3]

virtual std::shared_ptr< named_tensor_views > viam::sdk::MLModelService::infer ( const named_tensor_views & inputs, const ProtoStruct & extra )
virtual

Runs the model against the input tensors and returns inference results as tensors.

Parameters
`inputs`The input tensors on which to run inference.
`extra`Any additional arguments to the method.
Returns
The results of the inference as a shared pointer to named_tensor_views. The data viewed by the views is only valid for the lifetime of the returned shared_ptr.

Implements viam::sdk::MLModelService.

◆ metadata() [1/2]

struct metadata viam::sdk::impl::MLModelServiceClient::metadata ( const ProtoStruct & extra)
overridevirtual

Returns metadata describing the inputs and outputs of the model.

Parameters
`extra`Any additional arguments to the method.

Implements viam::sdk::MLModelService.

◆ metadata() [2/2]

virtual struct metadata viam::sdk::MLModelService::metadata ( const ProtoStruct & extra)
virtual

Returns metadata describing the inputs and outputs of the model.

Parameters
`extra`Any additional arguments to the method.

Implements viam::sdk::MLModelService.


The documentation for this class was generated from the following file: mlmodel_client.hpp