Xara should add face and voice recognition using the camera and microphone present on most computers, and use AI to process users' facial expressions, the glint in their eye, and the expletives they utter.
Instead of making the user trawl through the template and tool menus, the camera should watch the user and the software should generate pages accordingly.
For example, the user could say "Hey Xara, I need a photography website that works on any platform". If the user doesn't look happy, or utters corrective phrases such as "No F***N way will I use that" or "I don't want them flowery colours, I want to make my users miserable", then the software will adjust the generated pages to suit, then look and listen again.
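The generate-then-listen loop above could be sketched roughly as follows. Everything here is hypothetical: the function names (`detect_dissatisfaction`, `generate_until_happy`), the word list, and the idea of passing `generate` and `listen` callables are all invented for illustration, not any real Xara API.

```python
import re

# Hypothetical word list for spotting "corrective" speech in a transcript.
NEGATIVE_WORDS = {"no", "don't", "not", "rubbish", "miserable", "ditch"}

def detect_dissatisfaction(transcript: str) -> bool:
    """Return True if the transcribed speech sounds like a correction."""
    tokens = set(re.findall(r"[a-z']+", transcript.lower()))
    return bool(tokens & NEGATIVE_WORDS)

def generate_until_happy(prompt, generate, listen, max_rounds=5):
    """Regenerate pages until the user stops objecting (or we give up)."""
    page = generate(prompt)
    for _ in range(max_rounds):
        feedback = listen()
        if not detect_dissatisfaction(feedback):
            break  # no tears or expletives detected: layout approved
        page = generate(prompt + " | correction: " + feedback)
    return page
```

Real speech-to-text and sentiment analysis would of course be far messier than a keyword set, but the loop shape (generate, watch, listen, adjust) is the point.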
Naturally, nuanced expressions such as tears will indicate dissatisfaction, and the software will correlate the expression with the user's posts on TalkGraphics, so multiple expressions of dissatisfaction on TG will tweak the algorithm to generate more tears.
Similarly, element adjustment will be voice-controlled. So, once the basic layout is approved (no tears detected), the user will be able to populate and nuance the design. For example, the user might say "Hey Xara, ditch that Lorem ipsum header rubbish and replace it with 'Welcome to my website, press ENTER to continue'".
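Parsing an edit command like that one could look roughly like this. The "ditch X and replace it with Y" grammar and the `parse_edit` helper are assumptions made up for this sketch; a shipping product would need a proper natural-language layer.

```python
import re

# Hypothetical voice-edit grammar: "ditch (that) <target> and replace it with <text>".
EDIT_PATTERN = re.compile(
    r"ditch\s+(?:that\s+)?(?P<target>.+?)\s+and\s+replace\s+it\s+with\s+(?P<text>.+)",
    re.IGNORECASE,
)

def parse_edit(command: str):
    """Return (target_element, replacement_text), or None if this isn't an edit."""
    m = EDIT_PATTERN.search(command)
    if not m:
        return None
    return m.group("target").strip(), m.group("text").strip().strip('"')
```

The lazy `.+?` on the target means it stops at the first "and replace it with", so the replacement text itself survives intact.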
The new system will integrate with Xara Cloud, so in the design phase the user can say "Hey, Xara Cloud, don't let the bloomin' users mess with my picture on column 3, give them the finger if they try and change it."
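Under the hood, that would amount to per-element edit locks on the shared document. This minimal sketch assumes a `Document` class and string element ids, both invented here; the finger-giving is left as an exercise.

```python
class Document:
    """Hypothetical shared cloud document with per-element edit locks."""

    def __init__(self, owner: str):
        self.owner = owner
        self.locked: set = set()  # ids of elements only the owner may edit

    def lock(self, element_id: str) -> None:
        """Owner protects an element from other collaborators."""
        self.locked.add(element_id)

    def try_edit(self, element_id: str, user: str) -> bool:
        """Return True if the edit is allowed, False if it is refused."""
        if element_id in self.locked and user != self.owner:
            return False  # politely refuse (no actual finger is given)
        return True
```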
These enhancements will increase uptake of Xara software no end.
Hey Xara, you listening?