Marketers have recently realized that the objects and packaging around us do not define us, but they do describe us rather well. If I know you use a given object, I can guess plenty of other things you will probably like. We had been looking for a way to make those things pop out of the objects you use, like a genie popping out of a lamp when you rub it. We found it: just look at a given object with your app, and see the magic that has been prepared for you! It can be services you may find of interest if you have a given disease, 3D instructions for use or a chatbot replacing a paper leaflet, a community you did not know how to reach...
Whatever the magic, one crucial point is to test what kind of magic is proposed to users, and to see how well they like it. More than for any other digital content, there is a need to A/B test what appears on the screen, and to trigger it with a mere scan: Hack:Pack is the demo app that enables this. With Hack:Pack, you can rapidly prototype a full scenario of object recognition plus triggered digital content, and go for a field study, all in a matter of days!
How do we handle your personal data?
Very simple: it doesn't handle any! Hack:Pack is an app with no login and no password. The claim is that the objects around you are enough to describe and profile you. If you scan a pack of insulin, knowing your age, gender, weight and, of course, email does not tell us more than the scan itself: you are probably diabetic, and that is enough to build relevant, attractive digital content.
By default, any trigger you add to the Hack:Pack app will be recognized by the community and appear in the library of available triggers. If you do not want your triggers visible to the world, then, and only then, do you need to create an account to "privatize" them. You'll only be asked for a login, a password and an email address for password retrieval. We do not use it for any other purpose, and, of course, delete it on demand.
Use Hack:Pack to effortlessly enrich the digital experience of any object! Just go to "How to add a new trigger" in the Hack:Pack app, and follow the step-by-step guidelines:
Step 1 Record 1 min videos of the object to be recognized by the Hack:Pack scanner:
Step 2 Upload your video to the Hack:Pack cloud for processing:
Step 3 Attach an overlay to your new trigger, either directly in the app or on the Hack:Pack web platform:
Hack:Pack automates the process of 2D/3D object recognition using leading-edge AI technology known as deep learning. When humans look at a photograph or watch a video, we readily spot people, objects, scenes, and visual details. The goal is to teach a computer to do what comes naturally to humans: to gain a level of understanding of what an image contains.
Now a bit of vocabulary:
CNN BASED RECOGNITION
Deep learning models such as convolutional neural networks, or CNNs, are used to automatically learn an object’s inherent features in order to identify that object.
Hack:Pack’s CNN can learn to identify differences between your products / objects by analysing thousands of training images and learning the features that make your products different.
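To illustrate the convolution operation at the core of a CNN (a simplified sketch, not Hack:Pack's actual model), here is a minimal 2D convolution in Python with NumPy; the 3x3 kernel is a hypothetical vertical-edge detector, one of the low-level features a CNN learns on its own:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a kernel over a 2D image (valid padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny synthetic "image": dark left half, bright right half.
image = np.zeros((5, 5))
image[:, 3:] = 1.0

# Hypothetical vertical-edge kernel (Sobel-like), for illustration only.
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]])

edges = conv2d(image, kernel)
# The response is strongest where the dark and bright halves meet.
```

In a real CNN, many such kernels are stacked in layers and their weights are learned from the training images rather than hand-written.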
READY TO MAKE YOUR PRODUCT VISIBLE?
This is why large quantities of images of your product must be uploaded onto the HackPack.fr platform prior to recognition.
No worries! A 1 min video contains 1440 images (at 24 frames per second)! This is a large enough dataset to reach decent recognition!
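The frame arithmetic behind the 1440-image figure can be checked quickly (24 fps is the frame rate implied by that figure):

```python
def frames_in_video(duration_s: int, fps: int = 24) -> int:
    """Number of still images extracted from a video at a given frame rate."""
    return duration_s * fps

# A 1-minute video at 24 fps yields 1440 training images.
print(frames_in_video(60))  # 1440
```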
Visit the "How to add a trigger" section to get guidance on how to optimize the recording of your training datasets with your smartphone.
An image or a 3D object that you wish to be able to recognize with the Hack:Pack app is named a "trigger". Triggers are so named because they trigger the display of content on the screen, independently of any action from the user.
Hack:Pack's machine learning algorithm (also known as the "model") is "trained" to recognize a trigger via videos of it, taken with the Hack:Pack app. Once the videos are uploaded to the Hack:Pack cloud, this training takes 4-5 hours, and runs on any new trigger twice a day, at noon and at midnight GMT.
The content that is displayed upon scanning a known trigger is named the "overlay": a digital scenario made of images, sound, 3D content, videos, etc. This content can be created and assembled 100% outside of Hack:Pack and then imported. The Hack:Pack app offers a minimal set of editing functions to make very basic overlays.
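To make the overlay concept concrete, here is a hypothetical sketch of an overlay as a timed scenario mixing media types. The field names and asset names are illustrative only, not Hack:Pack's actual format:

```python
# Hypothetical overlay description: a timed sequence of media steps.
# Field and asset names are illustrative, not the real Hack:Pack format.
overlay = {
    "trigger": "insulin-pack-demo",
    "steps": [
        {"at_s": 0,  "type": "3d",    "asset": "pen_model.glb"},
        {"at_s": 5,  "type": "video", "asset": "usage_tutorial.mp4"},
        {"at_s": 30, "type": "sound", "asset": "reminder_chime.mp3"},
    ],
}

def total_duration(overlay) -> int:
    """Timestamp of the last step in the scenario, in seconds."""
    return max(step["at_s"] for step in overlay["steps"])
```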
Step 1 Record 1 min videos of the objects to be recognized by the Hack:Pack scanner
Step 2 Upload your videos to the Hack:Pack cloud for processing
Best results seem to be obtained with 2 or 3 videos of 1 min each, recorded with your mobile directly from the Hack:Pack app. After the computational crunch and model training, this should give ca. 2000-3000 images, which is a good target. You can also add extra videos from your personal library (both on the mobile and on the Hack:Pack web platform), but on their own they tend to give a recognition quality that is not as good as the one obtained with in-app recorded videos.
IMPORTANT: only public triggers are visible in the library! These are triggers created by anonymous users. They cannot be modified or enriched once created, even by their author. If you wish to keep your trigger invisible to the community and/or fine-tune its overlay later, you'll have to create a login/password to be able to access the Hack:Pack web platform.
The library of triggers is the list of 3D objects that the Hack:Pack scanner has been trained to recognize. It contains public triggers, i.e. triggers that anonymous users have decided to make visible to everybody.
Many more triggers than those in the public library can actually be recognized by the Hack:Pack scanner: private triggers, created by non-anonymous users, will be recognized, but you won't know which they are, unless you get that information via a different channel and/or try to scan everything you come across!
The Hack:Pack library contains a lot of triggers, and grows every day. The Hack:Pack machine learning algorithm is robust enough to be educated with millions of triggers, and not confuse them!
Triggers are "crunched" by our machine learning algorithm twice a day, at noon and midnight GMT. Every crunch takes a few hours.
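Since training runs at fixed times (noon and midnight GMT), a small helper can tell you when a newly uploaded trigger will next be crunched. This is a sketch of the schedule described above, not an official Hack:Pack API:

```python
from datetime import datetime, timedelta, timezone

def next_crunch(now: datetime) -> datetime:
    """Next model-training slot (noon or midnight GMT) strictly after `now`."""
    now = now.astimezone(timezone.utc)
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    for hours in (0, 12, 24):  # today's midnight, noon, tomorrow's midnight
        slot = midnight + timedelta(hours=hours)
        if slot > now:
            return slot

# A trigger uploaded at 09:30 GMT is crunched at the noon run.
print(next_crunch(datetime(2024, 5, 1, 9, 30, tzinfo=timezone.utc)))
# 2024-05-01 12:00:00+00:00
```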
You may do this directly from the app as you create the trigger. The UX is pretty limited in that case: just a redirection to a URL. This can be enough to test the recognition quality of the trigger itself. If you want to attach an overlay with a full scenario, privatize your trigger so you can retrieve it on the Hack:Pack web platform: it offers many more options for creating engaging scenarios, including timed sequences embedding sound, video, 3D objects, etc. You'll need the login and password you created in the app to access the Hack:Pack web platform: https://hackpack.beeyond.fr.
Nope. Whenever you’re ready, just let us know and we’ll provide you with the SDK, so that you can embed the functionalities (and trigger libraries) of Hack:Pack in your own app!
Yes, using Hack:Pack to build demos is totally free, regardless of the number of triggers and connections. We rent the SDK for a fee plus a share of the value created by the scans when you integrate the Hack:Pack functionalities in your app.