The Velocitas development model is centered around what are known as Vehicle Apps. Automation allows engineers to make high-impact changes frequently and deploy Vehicle Apps through cloud backends as over-the-air updates. The Vehicle App development model is about speed and agility, paired with state-of-the-art software quality.
Development Architecture
Velocitas provides a flexible development architecture for Vehicle Apps. The following diagram shows the major components of the Velocitas stack.
Vehicle Apps
The Vehicle Applications (Vehicle Apps) contain the business logic that needs to be executed on a vehicle. A Vehicle App is implemented on top of a Vehicle Model and its underlying language-specific SDK. Many concepts of cloud-native and twelve-factor applications apply to Vehicle Apps as well and are summarized in the next chapter.
Vehicle Models
A Vehicle Model makes it possible to easily get vehicle data from the Databroker and to execute remote procedure calls over gRPC against Vehicle Services and other Vehicle Apps. It is generated from the underlying semantic models for a concrete programming language as a graph-based, strongly-typed, intellisense-enabled library. The elements of the vehicle models are defined by the SDKs.
SDKs
Our SDKs, available for different programming languages, are the foundation for the vehicle abstraction provided by the Vehicle Model. Furthermore, they offer abstraction from the underlying middleware and communication protocols and provide the base classes and utilities for Vehicle Apps.
SDKs are currently available for Python and C++; further SDKs for Rust and C are planned.
Vehicle Services
Vehicle Services provide service interfaces to control actuators or to trigger (complex) actions. For example, they communicate with vehicle-internal networks like CAN or Ethernet, which are connected to actuators, electronic control units (ECUs) and other vehicle computers (VCs). They may provide a simulation mode to run without a network interface. Vehicle Services may feed data to the Databroker and may expose gRPC endpoints, which can be invoked by Vehicle Apps over a Vehicle Model.
KUKSA Databroker
Vehicle data is stored in the KUKSA Databroker, conforming to an underlying semantic model like VSS. Vehicle Apps can either pull this data or subscribe to updates. In addition, the Databroker supports rule-based access to reduce the number of updates sent to the Vehicle App.
Semantic models
The Vehicle Signal Specification (VSS) provides a domain taxonomy for vehicle signals and semantically defines the vehicle data exchanged between Vehicle Apps and the Databroker.
The Velocitas SDK uses VSS as the semantic model for the Vehicle Model.
Vehicle Service models can be defined with Protobuf service definitions.
Communication Protocols
Asynchronous communication between Vehicle Apps and other vehicle components, as well as cloud connectivity, is facilitated through MQTT messaging. Direct, synchronous communication between Vehicle Apps, Vehicle Services and the Databroker is based on the gRPC protocol.
Middleware Abstraction
Velocitas provides middleware abstraction interfaces for service discovery, pub/sub messaging, and other cross-cutting functionalities.
At the moment, Velocitas offers only a so-called "native middleware" implementation, which does not provide (gRPC) service discovery. Instead, the addresses and port numbers of services need to be provided to an app via environment variables, e.g. SDV_VEHICLEDATABROKER_ADDRESS=grpc://localhost:55555.
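For illustration only, a minimal Python sketch of how an app could read this address; the fallback value is an assumption for local development:

import os

# Address of the KUKSA Databroker, provided by the native middleware
# configuration via an environment variable.
# The fallback is an assumption for local development only.
databroker_address = os.environ.get(
    "SDV_VEHICLEDATABROKER_ADDRESS", "grpc://localhost:55555"
)
print(f"Connecting to the KUKSA Databroker at {databroker_address}")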
The support of Dapr as middleware has recently been removed.
Vehicle Edge Operating System
Vehicle Apps are expected to run on a Linux-based operating system. An OCI-compliant container runtime is required to host the Vehicle App containers. For publish/subscribe messaging, an MQTT broker must be available (e.g., Eclipse Mosquitto).
Vehicle App Characteristics
The following aspects are important characteristics for Vehicle Apps:
Code base: Every Vehicle App is stored in its own repository. Tracked by version control, it can be deployed to multiple environments.
Polyglot: Vehicle Apps can be written in any programming language. System-level programming languages like Rust and C/C++ are particularly relevant for the limited hardware resources found in vehicles, but higher-level languages like Python and JavaScript are also considered for special use cases.
OCI-compliant containers: Vehicle Apps are deployed as OCI-compliant containers. The size of these containers should be minimal to fit on constrained devices.
Isolation: Each Vehicle App should execute in its own process and should be self-contained, with its interfaces and functionality exposed on its own port.
Configurations: Configuration information is separated from the code base of the Vehicle App, so that the same deployment can propagate across environments with their respective configuration applied.
Disposability: Favor fast startup and support graceful shutdowns to leave the system in a correct state.
Observability: Vehicle Apps provide traces, metrics and logs of every part of the application using OpenTelemetry.
Over-the-air update capability: Vehicle Apps can be deployed via cloud backends like Pantaris and updated in vehicles frequently over the air through NextGen OTA updates.
Development Process
The starting point for developing Vehicle Apps is a Semantic Model of the vehicle data and vehicle services. Based on the Semantic Model, language-specific Vehicle Models are generated. Vehicle Models are then distributed as packages to the respective package manager of the chosen programming language (e.g. pip, cargo, npm, …).
After a Vehicle Model is available for the chosen programming language, the Vehicle App can be developed using the generated Vehicle Model and its SDK.
Further information
1 - Vehicle App SDK
Learn more about the provided Vehicle App SDK.
Introduction
The Vehicle App SDK consists of the following building blocks:
Vehicle Model Ontology: The SDK provides a set of model base classes for the creation of vehicle models.
Middleware integration: Vehicle Models can contain gRPC stubs to communicate with Vehicle Services. gRPC communication is integrated natively.
Fluent query & rule construction: Based on a concrete Vehicle Model, the SDK is able to generate queries and rules against the KUKSA Databroker to access the real values of the data points that are defined in the vehicle model.
Publish & subscribe messaging: The SDK supports publishing messages to an MQTT broker and subscribing to topics of an MQTT broker.
Vehicle App abstraction: Last but not least, the SDK provides a VehicleApp base class, which every Vehicle App derives from.
An overview of the Vehicle App SDK and its dependencies is depicted in the following diagram:
Vehicle Model Ontology
The Vehicle Model is a tree-based model where every branch in the tree, including the root, is derived from the Model base class provided by the SDK.
The Vehicle Model Ontology consists of the following classes:
Model
A model contains data points (leaves) and other models (branches).
ModelCollection
Info
The ModelCollection is deprecated since SDK v0.4.0. The generated vehicle model must reflect the actual representation of the data points. Please use the Model base class instead.
Specifications like VSS support a concept called Instances, which makes it possible to describe repeating definitions. In DTDL, such structures may be modeled with Relationships. In the SDK, these structures are mapped with the ModelCollection class. A ModelCollection is a collection of models, which makes it possible to reference an individual model either by a NamedRange (e.g., Row [1-3]), a Dictionary (e.g., "Left", "Right") or a combination of both.
Service
Direct asynchronous communication between Vehicle Apps and Vehicle Services is facilitated via the gRPC protocol.
The SDK has its own Service base class, which provides a convenience API layer to access the exposed methods of exactly one gRPC endpoint of a Vehicle Service or another Vehicle App. Please see the Middleware Integration section for more details.
DataPoint
DataPoint is the base class for all data points. It corresponds to sensors/actuators/attributes in VSS or telemetry/properties in DTDL.
Data points are the signals that are typically emitted by Vehicle Services or Data Providers.
The representation of a data point is a path starting with the root model, e.g.:
Vehicle.Speed
Vehicle.FuelLevel
Vehicle.Cabin.Seat.Row1.Pos1.Position
Data points are defined as attributes of the model classes. The attribute name is the name of the data point without its path.
Typed DataPoint classes
Every primitive datatype has a corresponding typed data point class, which is derived from DataPoint (e.g., DataPointInt32, DataPointFloat, DataPointBool, DataPointString, etc.).
Example
An example of a Vehicle Model created with the described ontology is shown below:
## import ontology classes
from sdv import (
    DataPointDouble,
    DataPointFloat,  # added: used by Vehicle.Speed below
    Model,
    Service,
    DataPointInt32,
    DataPointBool,
    DataPointArray,
    DataPointString,
)


class Seat(Model):
    def __init__(self, name, parent):
        super().__init__(parent)
        self.name = name
        self.Position = DataPointBool("Position", self)
        self.IsOccupied = DataPointBool("IsOccupied", self)
        self.IsBelted = DataPointBool("IsBelted", self)
        self.Height = DataPointInt32("Height", self)
        self.Recline = DataPointInt32("Recline", self)


class Cabin(Model):
    def __init__(self, name, parent):
        super().__init__(parent)
        self.name = name
        self.DriverPosition = DataPointInt32("DriverPosition", self)
        self.Seat = SeatCollection("Seat", self)


class SeatCollection(Model):
    def __init__(self, name, parent):
        super().__init__(parent)
        self.name = name
        self.Row1 = self.RowType("Row1", self)
        self.Row2 = self.RowType("Row2", self)

    def Row(self, index: int):
        if index < 1 or index > 2:
            raise IndexError(f"Index {index} is out of range")
        _options = {
            1: self.Row1,
            2: self.Row2,
        }
        return _options.get(index)

    class RowType(Model):
        def __init__(self, name, parent):
            super().__init__(parent)
            self.name = name
            self.Pos1 = Seat("Pos1", self)
            self.Pos2 = Seat("Pos2", self)
            self.Pos3 = Seat("Pos3", self)

        def Pos(self, index: int):
            if index < 1 or index > 3:
                raise IndexError(f"Index {index} is out of range")
            _options = {
                1: self.Pos1,
                2: self.Pos2,
                3: self.Pos3,
            }
            return _options.get(index)


class VehicleIdentification(Model):
    def __init__(self, name, parent):
        super().__init__(parent)
        self.name = name
        self.VIN = DataPointString("VIN", self)
        self.Model = DataPointString("Model", self)


class CurrentLocation(Model):
    def __init__(self, name, parent):
        super().__init__(parent)
        self.name = name
        self.Latitude = DataPointDouble("Latitude", self)
        self.Longitude = DataPointDouble("Longitude", self)
        self.Timestamp = DataPointString("Timestamp", self)
        self.Altitude = DataPointDouble("Altitude", self)


class Vehicle(Model):
    def __init__(self, name, parent=None):  # root model has no parent
        super().__init__(parent)
        self.name = name
        self.Speed = DataPointFloat("Speed", self)
        self.CurrentLocation = CurrentLocation("CurrentLocation", self)
        self.Cabin = Cabin("Cabin", self)


vehicle = Vehicle("Vehicle")
Vehicle Services are expected to expose their public endpoints over the gRPC protocol. The related protobuf definitions are used to generate method stubs for the Vehicle Model to make it possible to call the methods of the Vehicle Services.
Model integration
Info
Please be aware that the integration of Vehicle Services into the overall model is not supported by
Based on the .proto files of the Vehicle Services, the protocol buffer compiler generates descriptors for all RPCs, messages, fields, etc. for the target language.
The gRPC stubs are wrapped by a convenience layer class derived from Service that contains all the methods of the underlying protocol buffer specification.
Info
The convenience layer in C++ is a bit more extensive than in Python. The complexity of gRPC's async API is hidden behind individual AsyncGrpcFacade implementations, which need to be implemented manually. Have a look at the SeatService of the SeatAdjusterApp example and its SeatServiceAsyncGrpcFacade.
A set of query methods like get(), where(), join(), etc. is provided through the Model and DataPoint base classes. These functions make it possible to construct SQL-like queries and subscriptions in a fluent language, which are then transmitted through the gRPC interface to the KUKSA Databroker.
Query examples
The following examples show you how to query data points.
self.rule = (
    await self.vehicle.Cabin.Seat.Row(2).Pos(1).Position.subscribe(
        self.on_seat_position_change
    )
)

def on_seat_position_change(self, data: DataPointReply):
    position = data.get(self.vehicle.Cabin.Seat.Row2.Pos1.Position).value
    print(f'Seat position changed to {position}')

# Call to broker:
# Subscribe(rule="SELECT Vehicle.Cabin.Seat.Row2.Pos1.Position")

# If needed, the subscription can be stopped like this:
await self.rule.subscription.unsubscribe()
auto subscription =
    subscribeDataPoints(velocitas::QueryBuilder::select(Vehicle.Cabin.Seat.Row(2).Pos(1).Position).build())
        ->onItem([this](auto&& item) { onSeatPositionChanged(std::forward<decltype(item)>(item)); });

// If needed, the subscription can be stopped like this:
subscription->cancel();

void onSeatPositionChanged(const DataPointMap_t datapoints) {
    logger().info("SeatPosition has changed to: " +
                  std::to_string(datapoints.at(Vehicle.Cabin.Seat.Row(2).Pos(1).Position)->asFloat().get()));
}
Vehicle.Cabin.Seat.Row(2).Pos(1).Position.where(
    "Cabin.Seat.Row2.Pos1.Position > 50"
).subscribe(on_seat_position_change)

def on_seat_position_change(data: DataPointReply):
    position = data.get(Vehicle.Cabin.Seat.Row2.Pos1.Position).value
    print(f'Seat position changed to {position}')

# Call to broker:
# Subscribe(rule="SELECT Vehicle.Cabin.Seat.Row2.Pos1.Position WHERE Vehicle.Cabin.Seat.Row2.Pos1.Position > 50")
auto query = QueryBuilder::select(Vehicle.Cabin.Seat.Row(2).Pos(1).Position)
                 .where(Vehicle.Cabin.Seat.Row(2).Pos(1).Position)
                 .gt(50)
                 .build();

subscribeDataPoints(query)->onItem(
    [this](auto&& item) { onSeatPositionChanged(std::forward<decltype(item)>(item)); });

void onSeatPositionChanged(const DataPointMap_t datapoints) {
    logger().info("SeatPosition has changed to: " +
                  std::to_string(datapoints.at(Vehicle.Cabin.Seat.Row(2).Pos(1).Position)->asFloat().get()));
}

// Call to broker:
// Subscribe(rule="SELECT Vehicle.Cabin.Seat.Row2.Pos1.Position WHERE Vehicle.Cabin.Seat.Row2.Pos1.Position > 50")
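Besides subscriptions, single values can also be read on demand with get(). A minimal Python sketch, assuming a generated Vehicle Model instance is stored as self.Vehicle and a logger is configured as in the other examples:

# One-shot read of a single data point via the fluent query API.
vehicle_speed = (await self.Vehicle.Speed.get()).value
logger.info("Current vehicle speed: %s", vehicle_speed)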
Publish & subscribe messaging
The SDK supports publishing messages to an MQTT broker and subscribing to topics of an MQTT broker. Using the Velocitas SDK, the low-level MQTT communication is abstracted away from the Vehicle App developer. In particular, the physical address and port of the MQTT broker are no longer configured in the Vehicle App itself, but are set via an environment variable outside of the Vehicle App.
Publish MQTT Messages
MQTT messages can be published easily with the publish_event() method, inherited from the VehicleApp base class:
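A minimal Python sketch; the topic name and payload are illustrative, and json is assumed to be imported as in the subscribe example below:

# Publish a message to an MQTT topic from within a VehicleApp method.
# publish_event() is inherited from the VehicleApp base class.
await self.publish_event(
    "seatadjuster/currentPosition",      # illustrative topic
    json.dumps({"position": 300}),       # illustrative payload
)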
In Python, subscriptions to MQTT topics can be easily established with the subscribe_topic() annotation. The annotation needs to be applied to a method of the VehicleApp base class. In C++, the subscribeToTopic() method has to be called; callbacks for onItem and onError can be set. The following examples provide some more details.
@subscribe_topic("seatadjuster/setPosition/request")
async def on_set_position_request_received(self, data: str) -> None:
    data = json.loads(data)
    logger.info("Set Position Request received: data=%s", data)
#include <fmt/core.h>
#include <nlohmann/json.hpp>

subscribeToTopic("seatadjuster/setPosition/request")
    ->onItem([this](auto&& item) {
        const auto jsonData = nlohmann::json::parse(item);
        logger().info(fmt::format("Set Position Request received: data={}", jsonData.dump()));
    });
Vehicle App abstraction
Vehicle Apps inherit from the VehicleApp base class. This enables the Vehicle App to use the publish & subscribe messaging and to connect to the KUKSA Databroker.
The Vehicle Model instance is passed to the constructor of the VehicleApp class and should be stored in a member variable (e.g. self.vehicle for Python, std::shared_ptr<Vehicle> m_vehicle; for C++), to be used by all methods within the application.
Finally, the run() method of the VehicleApp class is called to start the Vehicle App and register all MQTT topic and Databroker subscriptions.
Implementation detail
In Python, the subscriptions are based on asyncio, which makes it necessary to call the run() method with an active asyncio event_loop.
A typical skeleton of a Vehicle App looks like this:
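A minimal Python sketch of such a skeleton; the exact import paths and the name of the generated Vehicle Model package depend on your SDK version and project template and are assumptions here:

import asyncio

from sdv.vehicle_app import VehicleApp   # module path may differ per SDK version
from vehicle import Vehicle, vehicle     # generated Vehicle Model package (name is an assumption)


class MyVehicleApp(VehicleApp):
    """Skeleton of a Vehicle App."""

    def __init__(self, vehicle_client: Vehicle):
        super().__init__()
        # Keep the Vehicle Model instance in a member variable,
        # so all methods of the app can access it.
        self.Vehicle = vehicle_client

    async def on_start(self):
        # Register Databroker subscriptions here, e.g.:
        # await self.Vehicle.Speed.subscribe(self.on_speed_changed)
        pass


async def main():
    app = MyVehicleApp(vehicle)
    # run() starts the app and registers all MQTT topic and Databroker
    # subscriptions; it requires an active asyncio event loop.
    await app.run()


if __name__ == "__main__":
    asyncio.run(main())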
Vehicle Abstraction Layer (VAL)
Learn about the main concepts and components of the vehicle abstraction and how it relates to the Eclipse KUKSA project.
Introduction
The Vehicle Abstraction Layer (VAL) enables access to the systems and functions of a vehicle via a unified - or, even better, standardized - Vehicle API that abstracts from the details of the end-to-end architecture of the vehicle. The unified API enables Vehicle Apps to run on different vehicle architectures of a single OEM. Vehicle Apps can even be implemented OEM-agnostic if they use an API based on a standard like the COVESA Vehicle Signal Specification (VSS).
The Vehicle API eliminates the need to know the source, destination, and format of signals for the vehicle system.
The Eclipse Velocitas project uses the Eclipse KUKSA project.
KUKSA does not provide a concrete VAL; that is up to you as an OEM (vehicle manufacturer) or as a supplier. However, KUKSA provides the components and tools that help you implement a VAL for your chosen end-to-end architecture, and it can support you in simulating the vehicle hardware during the development phase of a Vehicle App or Service.
KUKSA provides ready-to-use generic components for signal-based access to the vehicle, like the KUKSA Databroker and the generic Data Providers (aka Data Feeders). It also provides reference implementations of certain Vehicle Services, like the Seat Service and the HVAC Service.
The KUKSA Databroker is a gRPC service acting as a broker of vehicle data/signals, also called data points in the following.
It provides central access to vehicle data points arranged in a - preferably standardized - vehicle data model like the COVESA VSS or others. This is not a must, however; it is also possible to use your own (proprietary) vehicle model or to extend the COVESA VSS with your specific extensions via VSS overlays.
Data points represent certain states of a vehicle, like the current vehicle speed or the currently applied gear. Data points can represent sensor values like the vehicle speed or engine temperature, actuators like the wiper mode, and immutable attributes of the vehicle like the needed fuel type(s) of the vehicle, engine displacement, maximum power, etc.
Data points that factually belong together are typically arranged in branches and sub-branches of a tree structure (like this example on the COVESA VSS site).
The KUKSA Databroker is implemented in Rust, can run in a container and provides services to get data points, update data points and subscribe to automatic notifications on data point changes.
Filter- and rule-based subscriptions of data points can be used to reduce the number of updates sent to the subscriber.
Data Providers / Data Feeders
Conceptually, a data provider is responsible for a certain set of data points: it provides updates of sensor data from the vehicle to the Databroker and forwards updates of actuator values to the vehicle. The set of data points a data provider maintains may depend on the network interface (e.g. CAN bus) via which that data is accessible, or on a certain use case the provider is responsible for (like seat control).
Eclipse KUKSA provides several generic Data Providers for different data sources.
As of today, Eclipse Velocitas only utilizes the generic CAN Provider (KUKSA CAN Provider), implemented in Python, which reads data from a CAN bus based on mappings specified in e.g. a CAN network description (DBC) file.
The feeder uses a mapping file and data point metadata to convert the source data to data points and injects them into the Databroker using its Collector gRPC interface.
The feeder automatically reconnects to the Databroker in the event that the connection is lost.
Vehicle Services
A vehicle service enables a Vehicle App to interact with the vehicle systems on an RPC-like basis.
It can provide service interfaces to control actuators or to trigger (complex) actions, or provide interfaces to get data.
It communicates with the Hardware Abstraction to execute the underlying services, but may also interact with the Databroker.
The KUKSA Incubation repository contains examples illustrating how such vehicle services can be built.
Hardware Abstraction
Data feeders rely on hardware abstraction. Hardware abstraction is project/platform specific.
The reference implementation relies on SocketCAN and vxcan; see the KUKSA CAN Provider.
The hardware abstraction may offer replaying (e.g., CAN) data from a file (can dump file) when the respective data source (e.g., CAN) is not available.
Information Flow
The VAL offers an information flow between vehicle networks and vehicle services.
The data that can flow is ultimately limited to the data available through the Hardware Abstraction, which is platform/project-specific.
The KUKSA Databroker offers read/subscribe access to data points based on a gRPC service. The data points which are actually available are defined by the set of feeders providing the data into the broker.
Services (like the seat service) define which CAN signals they listen to and which CAN signals they send themselves; see the documentation.
Service implementations may also interact as feeders with the Databroker.
Data flow when a Vehicle App uses the KUKSA Databroker
Data flow when a Vehicle App uses a Vehicle Service
Source Code
Source code and build instructions are available in the respective KUKSA repositories:
This section provides a style guide for .proto files. By following these conventions, you'll make your protocol buffer message definitions and their corresponding classes consistent and easy to read.
Unless otherwise indicated, this style guide is based on the style guide from google protocol-buffers style under the Apache 2.0 License & Creative Commons Attribution 4.0 License.
Note that protocol buffer style can evolve over time, so it is likely that you will see .proto files written in different conventions or styles. Please respect the existing style when you modify these files. Consistency is key. However, it is best to adopt the current best style when you are creating a new .proto file.
Standard file formatting
Keep the line length to 80 characters.
Use an indent of 2 spaces.
Prefer the use of double quotes for strings.
File structure
Files should be named lower_snake_case.proto
All files should be ordered in the following manner:
License header
File overview
Syntax
Package
Imports (sorted)
File options
Everything else
Directory Structure
Files should be stored in a directory structure that matches their package sub-names. All files in a given directory should be in the same package.
Below is an example based on the proto files in the kuksa-databroker repository.
| proto/
| └── sdv
|     └── databroker
|         └── v1                    // package sdv.databroker.v1
|             ├── broker.proto      // service Broker in sdv.databroker.v1
|             ├── collector.proto   // service Collector in sdv.databroker.v1
|             └── types.proto       // type definitions and imports in sdv.databroker.v1
Package names should be in lowercase. Package names should have unique names based on the project name, and possibly based on the path of the file containing the protocol buffer type definitions.
Message and field names
Use PascalCase (CamelCase with an initial capital) for message names – for example, SongServerRequest. Use underscore_separated_names for field names (including oneof field and extension names) – for example, song_name.
If your field name contains a number, the number should appear after the letter instead of after the underscore. For example, use song_name1 instead of song_name_1.
Repeated fields
Use pluralized names for repeated fields.
Versioning
All API interfaces must provide a major version number, which is encoded at the end of the protobuf package.
If an API introduces a breaking change, such as removing or renaming a field, it must increment its API version number to ensure that existing user code does not suddenly break.
Note: The use of the term “major version number” above is taken from semantic versioning. However, unlike in traditional semantic versioning, APIs must not expose minor or patch version numbers.
For example, APIs use v1, not v1.0, v1.1, or v1.4.2. From a user’s perspective, minor versions are updated in place, and users receive new functionality without migration.
A new major version of an API must not depend on a previous major version of the same API. An API may depend on other APIs, with an expectation that the caller understands the dependency and stability risk associated with those APIs. In this scenario, a stable API version must only depend on stable versions of other APIs.
Different versions of the same API should preferably be able to work at the same time within a single client application for a reasonable transition period. This time period allows the client to transition smoothly to the newer version. An older version must go through a reasonable, well-communicated deprecation period before being shut down.
For releases that have alpha or beta stability, APIs must append the stability level after the major version number in the protobuf package.
Release-based versioning
An individual release is an alpha or beta release that is expected to be available for a limited time period before its functionality is incorporated into the stable channel, after which the individual release will be shut down.
When using release-based versioning strategy, an API may have any number of individual releases at each stability level.
Alpha and beta releases must have their stability level appended to the version, followed by an incrementing release number. For example, v1beta1 or v1alpha5. APIs should document the chronological order of these versions in their documentation (such as comments).
Each alpha or beta release may be updated in place with backwards-compatible changes. For beta releases, backwards-incompatible updates should be made by incrementing the release number and publishing a new release with the change. For example, if the current version is v1beta1, then v1beta2 is released next.
The gRPC protocol is designed to support services that change over time. Generally, additions to gRPC services and methods are non-breaking. Non-breaking changes allow existing clients to continue working without changes. Changing or deleting gRPC services are breaking changes. When gRPC services have breaking changes, clients using that service have to be updated and redeployed.
Making non-breaking changes to a service has a number of benefits:
Existing clients continue to run.
Avoids work involved with notifying clients of breaking changes, and updating them.
Only one version of the service needs to be documented and maintained.
Non-breaking changes
These changes are non-breaking at a gRPC protocol level and binary level.
Adding a new service
Adding a new method to a service
Adding a field to a request message - Fields added to a request message are deserialized with the default value on the server when not set. To be a non-breaking change, the service must succeed when the new field isn’t set by older clients.
Adding a field to a response message - Fields added to a response message are deserialized into the message’s unknown fields collection on the client.
Adding a value to an enum - Enums are serialized as a numeric value. New enum values are deserialized on the client to the enum value without an enum name. To be a non-breaking change, older clients must run correctly when receiving the new enum value.
Binary breaking changes
The following changes are non-breaking at a gRPC protocol level, but the client needs to be updated if it upgrades to the latest .proto contract. Binary compatibility is important if you plan to publish a gRPC library.
Removing a field - Values from a removed field are deserialized to a message’s unknown fields. This isn’t a gRPC protocol breaking change, but the client needs to be updated if it upgrades to the latest contract. It’s important that a removed field number isn’t accidentally reused in the future. To ensure this doesn’t happen, specify deleted field numbers and names on the message using Protobuf’s reserved keyword.
Renaming a message - Message names aren’t typically sent on the network, so this isn’t a gRPC protocol breaking change. The client will need to be updated if it upgrades to the latest contract. One situation where message names are sent on the network is with Any fields, when the message name is used to identify the message type.
Nesting or unnesting a message - Message types can be nested. Nesting or unnesting a message changes its message name. Changing how a message type is nested has the same impact on compatibility as renaming.
Protocol breaking changes
The following items are protocol and binary breaking changes:
Renaming a field - With Protobuf content, the field names are only used in generated code. The field number is used to identify fields on the network. Renaming a field isn’t a protocol breaking change for Protobuf. However, if a server is using JSON content, then renaming a field is a breaking change.
Changing a field data type - Changing a field’s data type to an incompatible type will cause errors when deserializing the message. Even if the new data type is compatible, it’s likely the client needs to be updated to support the new type if it upgrades to the latest contract.
Changing a field number - With Protobuf payloads, the field number is used to identify fields on the network.
Renaming a package, service or method - gRPC uses the package name, service name, and method name to build the URL. The client gets an UNIMPLEMENTED status from the server.
Removing a service or method - The client gets an UNIMPLEMENTED status from the server when calling the removed method.
Behavior breaking changes
When making non-breaking changes, you must also consider whether older clients can continue working with the new service behavior. For example, adding a new field to a request message:
Isn’t a protocol breaking change.
Returning an error status on the server if the new field isn’t set makes it a breaking change for old clients.
Behavior compatibility is determined by your app-specific code.
A framework for drafting error messages could be useful as a later improvement. It could, e.g., be used to specify which unit created the error message and to ensure the same structure of all messages. The latter two may, e.g., depend on debug settings, e.g., error details only in debug builds to avoid leaking sensitive information. A global function like the one below, or similar, could handle that and also possibly convert between internal error codes and gRPC codes.
grpc::Status status = CreateStatusMessage(PERMISSION_DENIED, "DataBroker", "Rule access rights violated");
SDV error handling for gRPC interfaces (e.g., VAL vehicle services)
Use gRPC error codes as base
Document in the proto files (as comments) which error codes the service implementation can emit and their meaning. (Errors that are only emitted by the gRPC framework do not need to be listed.)
Do not - unless there are special reasons - add explicit error/status fields to rpc return messages.
Additional error information can be given in free-text fields of gRPC error codes. Note, however, that sensitive information like "Given password ABCD does not match expected password EFGH" should not be passed in an unprotected/unencrypted manner.
SDV handling of gRPC error codes
The table below gives guidelines for each gRPC error code on:
whether it is relevant for a client to retry the call when receiving the error code (a retry is only relevant if the error is of a temporary nature), and
when to use the error code when implementing a service.

| gRPC error code | Retry relevant? | Recommended SDV usage |
| --- | --- | --- |
| OK | No | Mandatory error code if the operation succeeded. Shall never be used if the operation failed. |
| CANCELLED | No | No explicit use case on the server side in SDV identified. |
| UNKNOWN | No | To be used in default statements when converting errors from e.g. Broker errors to SDV/gRPC errors. |
| INVALID_ARGUMENT | No | E.g. rule syntax with errors. |
| DEADLINE_EXCEEDED | Yes | Only applicable for asynchronous services, i.e. services which wait for completion before the result is returned. The behavior if an operation cannot finish within the expected time must be defined. Two options exist: return this error after e.g. X seconds, or never give up on the server side and instead wait for the client to cancel the operation. |
| NOT_FOUND | No | Long-term situation that will likely not change in the near future. Example: SDV cannot find the specified resource (e.g. no path to get data for the specified seat). |
| ALREADY_EXISTS | No | No explicit use case on the server side in SDV identified. |
| PERMISSION_DENIED | No | Operation rejected due to permission denied. |
| RESOURCE_EXHAUSTED | Yes | Possibly if e.g. malloc fails or similar errors occur. |
| FAILED_PRECONDITION | Yes | Could be returned if e.g. the operation is rejected due to safety reasons (e.g. vehicle moving). |
| ABORTED | Yes | Could e.g. be returned if the service does not support concurrent requests and a related operation is already ongoing, or if the operation is aborted because a newer request was received. Could also be used if an operation is aborted on user/driver request, e.g. a physical button in the vehicle was pressed. |
| OUT_OF_RANGE | No | E.g. arguments out of range. |
| UNIMPLEMENTED | No | To be used if certain use cases of the service are not implemented, e.g. if recline cannot be adjusted. |
| INTERNAL | No | Internal errors, like exceptions, unexpected null pointers and similar. |
| UNAVAILABLE | Yes | To be used if the service is temporarily unavailable, e.g. during system startup. |
| DATA_LOSS | No | No explicit use case on the server side in SDV identified. |
| UNAUTHENTICATED | No | No explicit use case on the server side in SDV identified. |
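To illustrate how a service implementation might apply these guidelines, here is a hedged Python sketch using the grpcio API; the service name, request fields and limits are made up for the example:

import grpc


class SeatServiceServicer:  # hypothetical servicer; names are illustrative
    def Move(self, request, context):
        # OUT_OF_RANGE: argument outside the supported range.
        if request.position < 0 or request.position > 1000:
            context.abort(grpc.StatusCode.OUT_OF_RANGE,
                          "Seat position must be between 0 and 1000")
        # FAILED_PRECONDITION: operation rejected for safety reasons.
        if self._vehicle_is_moving():
            context.abort(grpc.StatusCode.FAILED_PRECONDITION,
                          "Seat adjustment rejected: vehicle is moving")
        # ... perform the movement and return the generated reply message ...

    def _vehicle_is_moving(self) -> bool:
        return False  # placeholder for a real check against vehicle state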
The AppManifest defines the properties of your Vehicle App and its functional interfaces (FIs).
FIs may be:
required service interfaces (e.g. a required gRPC service interface)
the used vehicle model and accessed data points.
an arbitrary abstract interface description used by 3rd parties
In addition to required FIs, provided FIs can (and need to) be specified as well.
These defined interfaces are then used by the Velocitas toolchain to:
generate service stubs for either a client implementation (required IF) or a server implementation (provided IF) (i.e. for gRPC)
generate a source code equivalent of the defined vehicle model
Overview
The image below depicts the interaction between the AppManifest and the DevEnv Configuration at development time. The responsibilities are clearly separated: the AppManifest describes the application and its interfaces, whereas the DevEnv Configuration (or .velocitas.json) defines the configuration of the development environment and all the packages used by the Velocitas toolchain.
Context
To fully understand the AppManifest, let’s have a look at who interacts with it:
Purpose
Define the requirements of a Vehicle App in an abstract way to avoid dependencies on concrete Runtime and Middleware configurations.
Description of your application's functional interfaces (VehicleModel, services, APIs, …)
Enable loose coupling of functional interface descriptions and the Velocitas toolchain. Some parts of the toolchain are responsible for reading the file and acting upon it, depending on the type of functional interface.
Providing an extendable syntax to enable custom functional interface types which may not be provided by the Velocitas toolchain itself, but by a third party
Providing a single source of truth for generation of deployment specifications (i.e. Kanto spec, etc…)
For example, a Vehicle App may declare:
an interface towards our generated Vehicle Signal Interface based on the COVESA Vehicle Signal Specification. In particular, it requires read access to the vehicle signal Vehicle.Speed; since the signal is marked as optional, the application will work even if the signal is not present in the system. Additionally, the application acts as a provider for the signal Vehicle.Cabin.Seat.Row1.Pos1.Position, meaning that it will take responsibility for reading/writing data directly to vehicle networks for the respective signal.
an interface towards gRPC based on the seats.proto file. Since the direction is required, a service client for the seats service will be generated which interacts with the Velocitas middleware.
an interface towards the pub/sub middleware, reading the topic sampleapp/getSpeed and writing the topics sampleapp/currentSpeed and sampleapp/getSpeed/response.
The example has no provided interfaces.
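As an illustration, a hedged sketch of how an AppManifest declaring these interfaces could look; the exact schema keys depend on the manifest version, and the proto URL and VSS source path are placeholders:

{
  "manifestVersion": "v3",
  "name": "SampleApp",
  "interfaces": [
    {
      "type": "vehicle-signal-interface",
      "config": {
        "src": "./app/vss.json",
        "datapoints": {
          "required": [
            { "path": "Vehicle.Speed", "optional": true, "access": "read" }
          ],
          "provided": [
            { "path": "Vehicle.Cabin.Seat.Row1.Pos1.Position" }
          ]
        }
      }
    },
    {
      "type": "grpc-interface",
      "config": {
        "src": "https://example.com/seats.proto",
        "required": { "methods": [ "Move", "MoveComponent" ] }
      }
    },
    {
      "type": "pubsub",
      "config": {
        "reads": [ "sampleapp/getSpeed" ],
        "writes": [ "sampleapp/currentSpeed", "sampleapp/getSpeed/response" ]
      }
    }
  ]
}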
Structure
Describes all external properties and interfaces of a Vehicle Application.
Support for additional interface types may be added by providing a 3rd-party CLI package.
Planned, but not yet available features
Some FIs depend on the classes, methods or literals used in your Vehicle App's source code. For example, the vehicle-model FI requires you to list required or provided datapoints. At the moment, these attributes need to be filled manually. There are ideas to auto-generate these attributes by analyzing the source code, but nothing is planned for that yet.
3.1.1 - Vehicle Signal Interface
The functional interface for providing vehicle signal access via the VSS specification.
Providing CLI package: devenv-devcontainer-setup
Interface type-key: vehicle-signal-interface
The Vehicle Signal Interface (formerly known as the Vehicle Model) interface type creates an interface to a signal interface described by the VSS spec. This interface will generate a source code package equivalent to the contents of your VSS JSON automatically upon devContainer creation.
If a Vehicle App requires a vehicle-signal-interface, it will act as a consumer of datapoints already available in the system. If, on the other hand, a Vehicle App provides a vehicle-signal-interface, it will act as a provider (formerly feeder in KUKSA terms) of the declared datapoints.
Furthermore, in the source code generated by this functional interface, a connection to the KUKSA Databroker will be established via the configured Velocitas middleware. It uses the broker.proto provided by the KUKSA Databroker to connect via gRPC; a separate declaration of a grpc-interface for the Databroker is NOT required.
The model generation is supported for VSS versions up to v4.0. There are some changes to some paths from v3.0 to v4.0; for example, Vehicle.Cabin.Seat.Row1.Pos1.Position in v3.0 is Vehicle.Cabin.Seat.Row1.DriverSide.Position in v4.0. If you are using the mock provider, you need to take that into account when you specify your mock.py.
3.1.2 - gRPC Service Interface
The functional interface for supporting remote procedure calls via gRPC.
Providing CLI package: devenv-devcontainer-setup
Interface type-key: grpc-interface
Description
This interface type introduces a dependency to a gRPC service. It is used to generate either client stubs (in case your application requires the interface) or server stubs (in case your application provides the interface). The result of the generation is a language specific and package manager specific source code package, integrated with the Velocitas SDK core.
If a Vehicle App requires a grpc-interface, a client stub embedded into the Velocitas framework will be generated and added as a build-time dependency of your application. It enables you to access your service from your Vehicle App without any additional effort.
If a Vehicle App provides a grpc-interface, a server stub embedded into the Velocitas framework will be generated and added as a build-time dependency of your application. It enables you to quickly add the business logic of your application.
The interface configuration references the URI of the used protobuf specification of the service. The URI may point to a local file or to a file provided by a server. It is generally recommended to use a stable proto file, i.e. one that has already been released under a proper tag, rather than an in-development proto file.

| Attribute | Type | Example value | Description |
| --- | --- | --- | --- |
| required.methods | array | | Array of the service's methods that are accessed by the application. In addition to access control, the methods attribute may be used to determine backward or forward compatibility, i.e. if the semantics of a service's interface did not change but methods were added or removed in a future version. |
| required.methods.[].name | string | "Move", "MoveComponent" | Name of the method that the application would like to access. |
| provided | object | {} | Reserved object indicating that the interface is provided. Might be filled with further configuration values. |
You need to specify devenv-devcontainer-setup >= v2.4.2 in your project configuration. To add the required component, you can run velocitas component add grpc-interface-support once your package is at or above v2.4.2. Your .velocitas.json should then look similar to this example:
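A hedged sketch of such a project configuration; the exact key names and additional entries depend on your Velocitas CLI and package versions, and the components and variables entries below are assumptions:

{
  "packages": [
    {
      "name": "devenv-devcontainer-setup",
      "version": "v2.4.2"
    }
  ],
  "components": [
    "grpc-interface-support"
  ],
  "variables": {}
}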
3.1.3 - Publish Subscribe
The functional interface for supporting communication via publish and subscribe.
Providing CLI package: devenv-runtimes
Interface type-key: pubsub
Description
This interface type introduces a dependency to a publish and subscribe middleware. While this may change in the future due to new middlewares being adopted, at the moment this will always indicate a dependency to MQTT.
If a Vehicle App requires pubsub - this will influence the generated deployment specs to include a publish and subscribe broker (i.e. an MQTT broker).
If a Vehicle App provides pubsub - this will influence the generated deployment specs to include a publish and subscribe broker (i.e. an MQTT broker).
Configuration structure
| Attribute | Type | Example value | Description |
| --- | --- | --- | --- |
| reads | array[string] | [ "sampleapp/getSpeed" ] | Array of topics which are read by the application. |