2 - Dress Me Up System


Dynamic Diagrams

Flowcharts / components


```mermaid
graph TD
  subgraph Legend
    Process[Process]:::process
    Input[/Input/]:::input
    Output[\Output\]:::output
    Terminal(Terminal):::terminal
    SubProcess[[Subprocess]]
    Document>Document]
    ManualProcess[\Manual Process/]
    Database[(Database)]
  end
```
```mermaid
graph TD
  subgraph DressMeUp API
    bwFiles[(BW files of garments)]
    userAvatar>UserAvatar]
    userAvatar -.-> videoComp[[Video Composition]]
    bwFiles -.-> videoComp
  end
  subgraph DressMeUp Mobile App
    signin(2-1 Sign in):::usecase
    signin --> bodyScan[[2-2-x Body Scan]]:::usecase
    bodyScan --> selectGarment[/2-2-2 Select a garment/]:::usecase
    bodyScan -.->|Upload| userAvatar
    selectGarment --> selectVideo[/2-2-3 Select a video/]:::usecase
    selectVideo --> upload[2-2-3 Upload]:::usecase
    upload --> videoComp
    upload -.-> userVideo>UserVideo] -.-> videoComp
    videoComp --> download(Download Composed Video)
  end
  subgraph DressMeUpAdmin
    catalogueApi[Read Odlo catalogue API]
    bwFiles & catalogueApi --> associateIds[Associate garment IDs with SKUs]
  end
  associateIds -.-> selectGarment
```

Photo composition

```mermaid
graph TD
  start(Start) --> posePrediction[Pose Prediction]
  posePrediction --> refinePose[\Refine Pose/]
  refinePose --> autoDress[Autodress garment on avatar in A pose]
  estimateDressingLandmarks[Estimate dressing landmarks] --> autoDress
  autoDress --> simulateToPose[Simulate To Pose]
  estimateLighting[Estimate lighting] & simulateToPose --> renderGarment[Render Garment]
  renderGarment --> registerRender[Register render]
  registerRender --> composeRender[Compose render into user photo]
  composeRender --- adjustments[ML Adjustments]
  composeRender --> finish(Finish)
  refinePose --> warpPhoto[Warp Photo to match avatar]
  warpPhoto --> composeRender
  userPhoto[/User Photo/]:::input -.-> posePrediction
  userAvatar[/User Avatar/]:::input -.-> autoDress
  composeRender -.-> composedPhoto[\Composed Photo\]
  userAvatar -.-> estimateDressingLandmarks
  garmentModel>Garment Model] -.-> autoDress
  userPhoto -.-> estimateLighting
```
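The photo-composition diagram is a dependency graph: each step can run only once its inputs exist. A minimal sketch of that ordering in Python follows — the step names mirror the diagram nodes, but the graph encoding and the helper are illustrative stubs, not the real implementation.

```python
# Hedged sketch: the photo-composition flow as a dependency graph.
# Keys are pipeline steps; values are what each step consumes.
# "user_photo", "user_avatar" and "garment_model" are raw inputs,
# so they appear only as dependencies, never as keys.
PHOTO_PIPELINE = {
    "pose_prediction":             ["user_photo"],
    "refine_pose":                 ["pose_prediction"],
    "estimate_dressing_landmarks": ["user_avatar"],
    "autodress":                   ["refine_pose", "estimate_dressing_landmarks",
                                    "user_avatar", "garment_model"],
    "simulate_to_pose":            ["autodress"],
    "estimate_lighting":           ["user_photo"],
    "render_garment":              ["estimate_lighting", "simulate_to_pose"],
    "register_render":             ["render_garment"],
    "warp_photo":                  ["refine_pose"],
    "compose_render":              ["register_render", "warp_photo"],
}

def execution_order(graph):
    """Topologically sort the pipeline (assumed acyclic) via DFS,
    so every step is scheduled after all of its inputs."""
    order, seen = [], set()

    def visit(step):
        if step in seen or step not in graph:  # raw inputs need no step
            return
        seen.add(step)
        for dep in graph[step]:
            visit(dep)
        order.append(step)

    for step in graph:
        visit(step)
    return order

if __name__ == "__main__":
    print(execution_order(PHOTO_PIPELINE))
```

Any valid schedule ends with `compose_render`, since it is the only step nothing else depends on.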

Video composition

```mermaid
graph TD
  start(Start) --> posePrediction[Pose Sequence Prediction]
  posePrediction --> createAnimation[Create animation]
  createAnimation --> refineAnimation[\Refine Animation/]
  refineAnimation --> autoDress[Autodress garment on avatar in A pose]
  estimateDressingLandmarks[Estimate dressing landmarks] --> autoDress
  autoDress --> simulateAnimation[Simulate animation]
  estimateLighting[Estimate lighting] & simulateAnimation --> renderSimulation[Render simulation]
  renderSimulation --> registerRender[Register render into scene]
  predictOcclusion[Predict occlusion]
  registerRender & predictOcclusion --> composeRender[Compose render into user video]
  composeRender --> finish(Finish)
  userVideo[/User Video/]:::input -.-> posePrediction
  userAvatar[/User Avatar/]:::input -.-> autoDress
  userAvatar -.-> createAnimation
  composeRender -.-> composedVideo[\Composed Video\]
  userAvatar -.-> estimateDressingLandmarks
  garmentModel>Garment Model] -.-> autoDress
  userVideo -.-> estimateLighting & predictOcclusion
```
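The main step the video diagram adds over the photo one is the occlusion-prediction branch: each rendered frame is composited into the user video only where the garment is not hidden behind the user. A minimal per-frame compositing sketch follows — the function names, flat pixel lists, and 0/1 mask representation are all assumptions for illustration, not the production representation.

```python
def compose_frame(user_frame, render_frame, occlusion_mask):
    """Composite a rendered garment frame over a user-video frame.

    Frames are flat lists of pixel values. occlusion_mask holds 1 where
    the garment is occluded (keep the user's pixel) and 0 where the
    garment is visible (take the rendered pixel).
    """
    return [
        user_px if occluded else render_px
        for user_px, render_px, occluded
        in zip(user_frame, render_frame, occlusion_mask)
    ]

def compose_video(user_frames, render_frames, occlusion_masks):
    """Apply per-frame compositing across the whole sequence."""
    return [
        compose_frame(u, r, m)
        for u, r, m in zip(user_frames, render_frames, occlusion_masks)
    ]
```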

Open issues & questions

  • Mobile app framework tech choice, e.g. AWS Amplify or GCP Firebase? Decision: Firebase.
  • Is the consumer feedback service going to be used here? It's in the October 2020 diagram.
  • Is post-processing of the QC body shape needed before storing the personal avatar in the pod? Added.
  • What to do with derived data: permissions vs. Ts&Cs?
  • How will we authenticate the end user for the Mallzee and QC services if the client is developed in Firebase but the server implementation is in AWS?
  • How are garments added, and in what format? Added.
  • Will all of these garments be available in the e-commerce feed? No; there are ambassador capsule collections as well.
  • Does the mobile app require any discovery at all, or just ID lookup / deep linking? Yes, product discovery is part of the UX. The queries will be made directly against the Firestore DB.
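The admin-side "Associate garment IDs with SKUs" step, which feeds the garment selection above, amounts to a join between the BW garment files and the Odlo catalogue. A sketch with plain dicts follows — the field names and catalogue shape are assumptions; in the app, the resulting records would back the Firestore queries rather than live in memory.

```python
def associate_garment_ids(bw_files, catalogue):
    """Join BW garment files to catalogue entries by SKU.

    bw_files:  {garment_id: sku}, as extracted from the BW files.
    catalogue: {sku: product_record}, as read from the Odlo catalogue API.

    Returns (associated, unmatched):
      associated -- {garment_id: product_record} for SKUs found in the
                    catalogue; this is what garment selection looks up by ID.
      unmatched  -- garment IDs with no catalogue entry (e.g. ambassador
                    capsule-collection items absent from the e-commerce feed).
    """
    associated = {
        garment_id: catalogue[sku]
        for garment_id, sku in bw_files.items()
        if sku in catalogue
    }
    unmatched = {
        garment_id
        for garment_id, sku in bw_files.items()
        if sku not in catalogue
    }
    return associated, unmatched
```

Keeping the unmatched set explicit makes the capsule-collection gap (noted in the open issues) visible instead of silently dropping those garments.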