SensorToolkit is a powerful abstraction over the built-in Unity functions like Raycasts, Overlaps and Trigger Colliders. You can add standalone sensor components to your game objects and configure them as needed. The sensors can be queried to determine what they detect and provide additional information, such as the visibility or shape of the target.
Features:
• Works in both 2D and 3D.
• Many sensor types available: Raycasting, Overlaps, Steering, Line of Sight and more.
• Lightweight and modular. Sensors are independent components that feel native to Unity.
• Easy to integrate with your project: simply add a sensor component and configure it.
• Flexible and unopinionated, allowing you to design your game as desired.
• Many filtering options and querying functions for precise detection.
• Capable of detecting individual colliders or rigid bodies made up of multiple colliders.
• Advanced line-of-sight implementation can calculate partial visibility.
• Performance is a key focus, with many options to control each sensor's runtime cost.
• Test sensors in the editor and confirm their configurations before running the game.
• Upgraded steering behaviour based on 'Context-Based Steering' method.
• All source code is included.
• Zero garbage generated.
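To illustrate the idea behind partial visibility (this is a generic sketch of the technique, not SensorToolkit's implementation): sample several points spread over the target's bounds and score visibility as the fraction of those points with an unblocked ray from the observer. The names `visibility_score` and `is_occluded` are hypothetical.

```python
def visibility_score(eye, target_points, is_occluded):
    """Fraction of sample points on the target that the eye can see.

    eye: observer position.
    target_points: sample points spread over the target's bounds.
    is_occluded(a, b): True if a ray from a to b is blocked (a stand-in
    for a physics raycast).
    """
    if not target_points:
        return 0.0
    visible = sum(1 for p in target_points if not is_occluded(eye, p))
    return visible / len(target_points)
```

A score of 1.0 means fully visible, 0.0 fully hidden, and anything in between is partial visibility, e.g. a target peeking out from behind cover.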
Who is this for:
SensorToolkit is designed to be approachable regardless of your level of programming experience. Non-programmers can create complex detection behaviours without writing code: the kit is integrated with Playmaker, and further integrations are planned for:
- Behavior Designer
- Game Creator 2
- Adventure Creator

Programmers will find SensorToolkit a powerful foundation for building AI behaviours, since it manages the complexities of object detection behind a clean, simple interface. The widgets and debugging inspectors also help you quickly identify configuration issues, reduce boilerplate, and keep your code organized.
But isn't detecting things easy?
I'm sometimes asked why you should use this asset when it's easy to call the raycast and overlap functions directly. While these are indeed simple calls to make, wrapping them is not the only value SensorToolkit aims to provide. The main challenges this asset aims to solve are as follows:
• Comprehensive coverage of the raycast, overlap, and trigger collider functionality.
• Debug widgets that show the exact shape of a sensor's detection range.
• Ability to examine what a sensor detects during runtime or in the editor, saving time and effort in debugging by clearly showing whether a sensor can detect an object or not.
• A powerful line-of-sight testing implementation, a particular focus of the toolkit.
• Robust event-driven architecture that allows you to change all sensor settings at runtime without encountering bugs.
• Sensors can detect objects made up of many sub-objects, such as a rigid body with multiple colliders on child objects. The sensors can intelligently map many input objects to a single output detection, with high configurability.
• Convenience functions that return lists or can be enumerated, without generating garbage.
• Sensors can be plugged into each other to build up complex detection behaviour.
• Much flexibility is provided to control the performance impact of the sensors at runtime.
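The many-to-one mapping mentioned above can be sketched generically (again, this is an illustration of the technique, not the asset's code): a physics query returns individual colliders, but what you usually want detected is the object that owns them, deduplicated. The names `group_detections` and `owner_of` are hypothetical.

```python
def group_detections(hit_colliders, owner_of):
    """Map many hit colliders to one detection per owning object.

    hit_colliders: colliders returned by an overlap or raycast query.
    owner_of(collider): the rigid body that owns the collider (or the
    collider itself when it has no body).
    Returns owners in first-hit order, without duplicates.
    """
    seen, detections = set(), []
    for col in hit_colliders:
        owner = owner_of(col)
        if owner not in seen:
            seen.add(owner)
            detections.append(owner)
    return detections
```

For example, a character whose arm and leg colliders are both hit should appear once in the results, not twice.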
Difference from v1:
This is a significant update from the first version of SensorToolkit. While it may seem familiar, a great deal has changed. The first version's code was difficult to extend, making it hard to add new sensors or grow existing ones without ballooning complexity. SensorToolkit 2 has a new architecture and philosophy that make it much easier to add new features. Some of the major new features include:
• Sensors capture the bounding box of detected objects, allowing for easy targeting of an object's center of mass.
• Ray sensor supports all Physics.Raycast shapes including 'ray', 'sphere', 'box' and 'capsule'.
• Range sensor supports all Physics.Overlap shapes including 'sphere', 'box' and 'capsule'.
• Sensors do not implement the 'Update' method, so a sensor that isn't pulsing has no per-frame cost. This is especially useful for the TriggerSensor, allowing you to create many static trigger zones that scale well while still offering the toolkit's comprehensive API.
• Line of Sight is now a separate sensor. It has been significantly extended, including the ability to smooth visibility scores over multiple frames, making it possible to calculate fractional visibility with a single ray per pulse. It can scale visibility by distance and view angle, so a FOVCollider is no longer necessary.
• Sensor pulses are randomly staggered so they don't fall on the same frame when they have the same pulse interval.
• Steering sensor that replaces the old Steering Rig and is based on the more advanced 'Context-Based Steering' method, which should work better and be easier to use.
• New arc sensor that allows for raycasts over a parabola.
• New boolean sensor lets you combine the results of multiple other sensors. For example: 'What are the things that I can hear and also see'.
• Ability to inject custom code into sensor detection logic for custom filtering needs.
• New Navmesh sensor for analyzing a navmesh.
• A more robust codebase in which all previously known bugs have been fixed.
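The 'Context-Based Steering' method mentioned above works by scoring a set of evenly spaced candidate directions with an interest map (how desirable each direction is) and a danger map (how threatened it is), then choosing among the safest slots. A minimal sketch of that idea follows; it is not the asset's implementation, and `choose_heading` is a hypothetical name.

```python
import math

def choose_heading(interest, danger):
    """Context-based steering sketch.

    interest[i] / danger[i]: desirability and threat of direction i,
    where slot i points at angle 2*pi*i / len(interest).
    Keeps only the slots with the lowest danger, then picks the one
    with the highest interest. Returns the chosen angle in radians.
    """
    n = len(interest)
    lowest = min(danger)
    best, best_score = 0, -math.inf
    for i in range(n):
        if danger[i] > lowest:
            continue  # masked out: a safer direction exists
        if interest[i] > best_score:
            best, best_score = i, interest[i]
    return 2 * math.pi * best / n
```

Because interest and danger are kept as separate maps until the final decision, an agent can still move toward its goal while sliding around obstacles, rather than oscillating the way single-vector steering often does.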