Mini Project - Photogrammetry Experiment
06/03/2025
The aim of this project was to experiment with photogrammetry and develop a 3D character asset which we could potentially use in our upcoming final project and help determine our preferred pathway.
I was unsure of how capable I was of developing 3D assets, so making a 3D game seemed completely out of the question for me. However, this mini project was incredibly helpful in giving me a better understanding of how realistically achievable a 3D project could be, regardless of whether it was something I would take an interest in this time around.
Choosing a Design
Before I could begin the project, I needed a design that I could use for this experiment. Due to some personal complications that hindered me from being able to complete an original design that I could use, I ended up looking through the "Oddworld: Abe's Oddysee" concept art book to see if I could find a design I enjoyed the look of.
There was a particular page that caught my attention. It was filled with these cute, small alien-like creatures.
I naturally gravitated towards this particular creature on the right as it is quite adorable. I quite liked the look of its big, bulgy eyeballs in contrast to its small, chunky body.
The way its mouth is shaped is intriguing and it was overall a design that I became attached to. I decided to move forward with it and model my experiment after it, though of course I intended to make small changes to it as I worked through the project.
Moulding the Paper Model
The next step after picking a design was to develop a physical model that we could use for the photogrammetry portion of the assignment.
When making my model, I opted not to use the plasticine provided to us, as I felt it was quite easy to make a mess while sculpting it, which I wasn't too keen on. Not to mention, I personally don't think I am very good at moulding it in the first place.
I find the material to be quite physically hard to mould as the plasticine is usually quite cold and rigid.
Instead I decided to scrunch up newspaper which was secured together by a ridiculously large amount of tape. My goal was to make sure my model had a decent amount of volume to it without it being incredibly heavy.
I ended up using quite a bit of newspaper to make the main body, mouth and tail.
Once I was happy with the overall shape, I did a few rounds of tape around the model to ensure it would maintain its shape, at least for the duration of our shoot.
The picture on the right is the outcome of this session, which took under thirty (30) minutes to complete. Even so, I think I did a pretty adequate job for my first time attempting to make a model out of paper, and I am really pleased with how I tackled the situation despite my lack of experience in this area.
Photograph Segment
Once the paper model was complete, I moved over to the studio to set up the equipment to begin the photogrammetry process.
I mounted my model so that I could take pictures from every possible angle, particularly below it.
The green curtains allow the program to be able to accurately distinguish the background from the actual model so that the digital model comes out as accurately and cleanly as possible.
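As a rough illustration of why a flat green backdrop helps, here is a small Python sketch (my own simplification, not the actual photogrammetry software's method) that classifies a pixel as background whenever green clearly dominates the other channels:

```python
def is_background(pixel, margin=40):
    """Classify an RGB pixel as green-screen background if its green
    channel dominates both red and blue by a set margin (arbitrary value)."""
    r, g, b = pixel
    return g > r + margin and g > b + margin

def mask_background(pixels):
    """Return True/False per pixel: True = backdrop, False = model."""
    return [is_background(p) for p in pixels]

# A bright green backdrop pixel vs. a grey newspaper pixel:
print(mask_background([(30, 200, 40), (120, 115, 110)]))  # [True, False]
```

Real software does something far more sophisticated, but the principle is the same: the more uniform and distinct the backdrop colour, the easier it is to separate the model from it.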
I was advised to also mark the essential features of the model with a marker so that the program would be able to process those prominent features more accurately and avoid losing major parts of the model.
Along with that, I set up the lighting so that the camera had a clear view of the model at all times as shadows often make it so that the program has a harder time processing the model as faithfully as it should.
I also had access to a tripod but I ended up opting not to use it as I felt more comfortable holding the camera myself and moving freely when it came to shooting different angles.
Taking pictures isn't at all a taxing task for me personally so I believe I was able to speed through it quite nicely whilst still taking the time to take each picture carefully.
There was a small issue with the camera not focusing as it should, which turned out to be an error with the mode it was set to, but it was quickly fixed by our technician. This process also took no more than thirty (30) minutes and was quite enjoyable.
Texture Painting Process
After receiving the object file, I loaded it into Blender. Thankfully, the mesh came out quite nicely and I did not have to spend too much time cleaning it up. It took roughly fifteen (15) minutes to get the model cleaned up and ready for the texture painting portion.
I decided to take inspiration from one of my favourite childhood games titled "Spore". It is self-described as a life simulation, real-time strategy game developed by Maxis and published by EA (Electronic Arts) in 2008. You essentially get to create your own species, customising not only its appearance but also the path it takes through the many stages of evolution.
I started off the process by choosing the base colour. I opted to go with green as it is a colour typically associated with alien-like creatures. I also began by accentuating the areas that would need to be shaded as they protrude out.
Next, I started adding the key features of the design. I opted to only give the creature one singular, large eyeball as many creatures from "Spore" share this aspect about their design, especially in the early stages of evolution. I made sure that the eyelids were rendered in a way to make it seem quite pronounced to give a more unsettling look.
Adding the details was quite fun, though the range of brushes available within Blender was pretty limiting. I found it a little hard to make interesting marks on the skin, as it was difficult to achieve the texture I was going for, or at the very least, to experiment with any textures at all.
Despite this, I attempted to add some markings to the alien's skin, as well as shading the rest of the body to make it come to life. I also attempted to recreate how "Spore" draws its characters' irises in some of its eye assets.
The screenshot below shows the final outcome of the model I was able to create. The texture painting was the longest portion of the process, mainly because I was experimenting with colours. It took roughly an hour to complete.
Reflection
Overall, the process of using photogrammetry was something I really enjoyed trying out and I feel like I was able to produce a satisfying final outcome. Trying something new and stepping out of my comfort zone can be challenging at times, but I am thankful to have had the opportunity to try it.
However, I cannot see myself using this method to produce assets for my final project long-term.
This is for a variety of reasons: namely, I do not feel I can realistically book the studio enough times to produce every single asset I may need, and I just cannot see myself making a 3D game based on my current skill set. My 3D rigging skills are simply not proficient enough to justify using this kind of process in my game, not to mention I have had multiple issues using the Unity engine as it is.
I am not the most adept when it comes to Blender and I recognise that my lack of experience limits the things I can use the program for at this point in time. I am sure there are many ways to add textures and brushes to my liking, through plugins if I were to guess. Where I stand now, though, I do not want to take a gamble and potentially sacrifice the quality of my final project.
I still think this project was very insightful, as it helped me solidify the path I feel most comfortable taking, that being a 2D game. That said, I would definitely like to revisit photogrammetry in my future independent projects.
Live2D Study & Experiments
About Live2D
The term "Live2D" refers to an animation technique where a static image is animated by separating it into multiple parts. Each part is animated on its own by meshing each image along a respective parameter, and the parts are put together to form a complete animation without the use of frame-by-frame animation. Live2D is able to achieve a 2.5D look with its 2D models whilst preserving the original illustration, despite not utilising any 3D aspects.
The most popular program for Live2D animation is "Live2D Cubism Editor", owned by a Japanese company named "Live2D Inc.". The company was founded in 2008 by CEO Tetsuya Nakashiro and was originally named "Cybernoids Co. Ltd.".
The first product released by this company was titled "Live2D Vector". It was announced in 2008 along with the launch of its beta period before officially releasing in 2009. It was the first program which allowed creators to utilise Live2D technology to develop their own animations.
In 2010, the company worked on adding compatibility for platforms such as the PSP, iOS and Android, which enabled the release of the first mobile game ever to use Live2D animation, a game titled "Barcode KANOJO" by CYBIRD CO., Ltd.
That same year, the company began working on a new product to expand the use of their Live2D technology. This is when they started developing the beta for the "Live2D Cubism Editor".
As the company grew bigger, they began extending the amount of platforms and programs that could utilise their technology. For instance, in 2012, the Nintendo 3DS, PS3, PS Vita and Unity had gained compatibility with the Live2D Cubism Editor beta.
The 1.0 version of the Live2D Cubism Editor officially released in 2013 and the program has consistently been updated to this very day, bringing us to the current 5.2 version.
Why use Live2D?
The most popular use of Live2D is the creation of "Live2D models". These models can be used in a variety of different ways, however, a popular usage has been to turn them into "avatars", otherwise known as "Vtuber models".
The term "Vtuber" refers to a "Virtual Youtuber" which is generally used to describe a content creator that utilises a character model instead of their face within the content they produce.
These models are rigged with the intent of capturing your facial movement and are able to utilise facial tracking technology when combined with third-party software.
The most popular third-party software to achieve seamless tracking is titled "Vtube Studio" and is used by a variety of creators.
(The GIF below belongs to a Vtuber called "Ironmouse", a popular content creator. She is partially known for her extensive library of beautiful Live2D models that she has commissioned from both artists and Live2D riggers over the years.)
Vtube Studio, in simple terms, bridges the gap between Live2D and the user and has many options within its settings for the kind of facial tracking that best suits your needs. The minimum piece of equipment required for the face tracking to work is a webcam.
You are even able to connect your iPhone to the Vtube Studio program, as recent iOS face tracking technology is known to achieve the most accurate results.
The example below features the use of the newly added NVIDIA tracking feature, which aims to achieve a precision similar to the iPhone tracking quality with just the use of a webcam.
Of course, these models can also simply be used to animate whichever scene you desire!
I believe that I can potentially utilise the Live2D animation technique to create bubbly animations for my characters within my game project, whether it's within a cutscene or even animated sprites. I may even be able to take advantage of the face tracking feature to cut down on production time.
How does Live2D Animation Work?
In order to make a Live2D model, or simply just a regular animation, the original illustration you are aiming to animate needs to be separated accordingly.
When making the illustration, you need to draw it with this in mind, as you will need to draw parts that you typically wouldn't have to worry about in a static image.
For instance, you would need to draw the area beneath the hair, separate each hair lock into its own entity and make sure you cover areas so that there aren't any awkward gaps during movement. If you want the eyes to move and blink, you'd have to draw each section of the eye separately (e.g. each eyelash, the sclera, the pupil, the eyeshine, etc.) so that you can clip and layer each part accordingly when animating.
Once you have separated every layer, you would need to name each individual layer so that you know which part is what. It is recommended to use groups as there can be an alarming amount of layers to work with. Here is an example of named layers:
You would then save this illustration as a PSD file so that each layer is saved separately. This PSD file would then be uploaded to the Live2D Cubism Editor program.
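To illustrate how groups end up organising things, here is a small Python sketch (with made-up layer names) that flattens a nested group structure into fully qualified layer names, much like the layout of a well-organised PSD:

```python
def flatten_layers(tree, prefix=""):
    """Walk a nested {group: {...}} structure (leaves are None) and
    return fully qualified layer names like 'Face/Eyes/L_pupil'."""
    names = []
    for key, value in tree.items():
        path = f"{prefix}/{key}" if prefix else key
        if isinstance(value, dict):      # a group containing more layers
            names.extend(flatten_layers(value, path))
        else:                            # a leaf layer
            names.append(path)
    return names

# Hypothetical layer structure for a model's face:
psd_layers = {"Face": {"Eyes": {"L_pupil": None, "L_sclera": None}, "Mouth": None}}
print(flatten_layers(psd_layers))
# ['Face/Eyes/L_pupil', 'Face/Eyes/L_sclera', 'Face/Mouth']
```

With hundreds of layers, this kind of consistent group-plus-name scheme is what keeps the file navigable once it reaches the editor.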
Model Experimentation
I opted to start experimenting with Live2D animation by practicing on a professionally produced model targeted towards Live2D students.
I browsed for free-to-use model PSDs on a website titled "Booth", a Japanese website that is commonly used for Vtuber and art related activities. The website itself was designed to make it easier and more accessible for indie artists and creators alike to sell their creations.
The PSD I chose to work with was the one featured in the screenshot below, distributed by Nokoyaworks.
I decided to go with this one as the artwork itself is nothing short of polished and fits the anime-like style I wish to pursue in my own work in the future. Not just that, each layer was separated nicely so that the rigging stage could go as smoothly as possible.
Once the PSD is imported into Live2D Cubism, it is assigned a simple art mesh.
Art mesh - An art mesh is a set of polygons that, when edited, change the appearance of the original texture they were assigned to. It is composed of small vertices and edges which all connect to each other.
As the initial mesh is quite basic, there is a feature to generate an automatic mesh, one with far more polygons.
There are three presets you can choose from, these being standard, deformation (little) and deformation (heavy).
Their names are pretty intuitive: the standard mode offers a middle ground, not too basic yet not too complicated; deformation (little) offers a simpler mesh, while the heavy option has a vastly larger number of polygons to work with.
I chose to work with a standard mesh just to start with. Later on, you are able to select particular sections of your illustration and increase the number of polygons further by using this feature again. It is worth noting that you are also able to make the mesh from scratch if you wish to do so.
Clipping Mask - Think of when you have to use a clipping mask within a regular illustration program (e.g. Procreate). You are able to make it so whatever is drawn on a specific layer is only visible within the existing layer underneath it, making anything that doesn't fit within the content of that lower layer no longer visible.
Here is another sample image provided by the Live2D Cubism Editor team. As you can see, the eye's iris layer overlaps the sclera layer, with the top bit sticking out.
This is why every layer is automatically assigned an ID (identification) within Live2D. As you can see in the previous image, the sclera was assigned the following ID - "ArtMesh36".
You are able to select a layer to check what ID it was given. Here is an example from my own model file:
From here on out, the rest is quite straightforward! You can apply one layer's ID to another by placing it in the "Clipping ID" section of the other respective layer.
This simple action makes it so whatever layer you need is clipped without hassle. See the example below:
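The effect itself can be sketched in a few lines of Python: the clipped layer's alpha is multiplied by the mask layer's alpha, so nothing outside the mask survives (the pixel values here are invented for illustration):

```python
def apply_clipping(layer_alpha, mask_alpha):
    """Clip one layer to another: the result is only visible where the
    mask layer itself has coverage (alpha values in the 0.0-1.0 range)."""
    return [a * m for a, m in zip(layer_alpha, mask_alpha)]

# The iris (fully opaque) sticking out past the sclera (opaque, then empty):
iris   = [1.0, 1.0, 1.0, 1.0]
sclera = [1.0, 1.0, 0.0, 0.0]
print(apply_clipping(iris, sclera))  # [1.0, 1.0, 0.0, 0.0]
```

This is the same behaviour as a clipping mask in an illustration program, just expressed per pixel.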
Deformers - These allow us to speed up the process of Live2D animation. Instead of moving each vertex individually, you are able to create a deformer that allows you to edit all of the mesh vertices inside it at the same time.
There are different types of deformers. I will briefly go over two crucial deformers all models tend to use. These are "warp deformers" and "rotation deformers".
Any vertices belonging to an object inside a warp deformer can be moved as long as the object is inside it. It also allows you to connect multiple objects with one another so they move in unison, which is a technique used to achieve better face angles or the flowy, swaying movement of an object.
A rotation deformer, on the other hand, is used to rotate an object. You may specify the angle you wish it to rotate to or move it free-hand. Like the warp deformer, its opacity and size can be adjusted to your liking.
Parent-child hierarchy - This is the system the program uses to keep track of the relationship between deformers and the objects inside them. The basic explanation is that by deforming the parent deformer, the child deformer will follow suit. However, deforming the child deformer will not affect the parent whatsoever.
Here is a visual example taken from the Live2D manual guide:
(A = Parent deformer, B = Child deformer.)
Deforming the child deformer
Deforming the parent deformer
Sometimes, messy parent-child hierarchies make it so you are unable to create a new deformer, as the program requires you to create the object with the same parent deformer selected.
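The one-way nature of this relationship can be sketched with a toy Python structure (not Live2D's actual internals), where moving a parent drags every child along while moving a child leaves the parent untouched:

```python
class Deformer:
    def __init__(self, name):
        self.name = name
        self.offset = 0.0       # simplified one-dimensional position
        self.children = []

    def add_child(self, child):
        self.children.append(child)

    def move(self, dx):
        """Deform this deformer: the change propagates down to every
        child, but never upward to a parent."""
        self.offset += dx
        for child in self.children:
            child.move(dx)

parent = Deformer("A")          # A = parent deformer
child = Deformer("B")           # B = child deformer
parent.add_child(child)

parent.move(5.0)                # deforming the parent moves both
child.move(2.0)                 # deforming the child moves only itself
print(parent.offset, child.offset)  # 5.0 7.0
```

The child ends up carrying both movements, while the parent only carries its own, which is exactly the behaviour shown in the manual's diagram.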
Parameters - These are settings that save and express the specific movements created by altering the art meshes. Keyframes, essentially.
Here is an example of the "Eye X" parameter I set up within my model file. As you can see, as I move along the range of the parameter, all of the eye-related meshes move accordingly along the X axis.
To add these keyframes, you need to select your desired art mesh and move over to the parameter menu. For instance, you can then use the highlighted button that automatically adds three keyframes to your parameter.
The button to its left adds two keyframes rather than three, which is necessary for specific movements.
You may add individual keys to each parameter depending on the purpose it serves; deleting keys is possible as well by using this tiny pop-up panel.
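Under the hood, a parameter behaves much like a simple keyframe track: any in-between value is interpolated from the surrounding keys. Here is a minimal Python sketch of that lookup (the key positions and offsets are made up):

```python
def interpolate(keys, t):
    """Linearly interpolate a parameter track.
    `keys` maps parameter values to a stored mesh offset."""
    points = sorted(keys.items())
    if t <= points[0][0]:
        return points[0][1]          # clamp below the first key
    if t >= points[-1][0]:
        return points[-1][1]         # clamp above the last key
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return v0 + f * (v1 - v0)

# Three keys, as added by the "three keyframes" button (-1, 0, 1):
eye_x = {-1.0: -10.0, 0.0: 0.0, 1.0: 10.0}
print(interpolate(eye_x, 0.5))  # 5.0
```

This is why only a handful of keys are needed per parameter: every in-between pose is computed, not hand-drawn.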
It is possible to link parameters together, a useful tool when considering animating sections such as your character's eyeballs.
Here is a sample I have from when I linked the "Eye X" and "Eye Y" parameters together. This allowed me to better visualise and adjust the movement of the related meshes.
However, before getting to this stage, there is a setting you need to apply. By linking these parameters together, you need to keep in mind that beforehand, "Eye X", for instance, didn't account for any movement along the Y axis. This causes gaps in the rigging between the combined coordinates of the two axes. To get the eyes to move as they should, like the previous example, you must use the "synthesize corners" feature.
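Conceptually, synthesising corners amounts to blending the four corner poses so that any combination of X and Y values produces a sensible in-between. Here is a rough bilinear-blend sketch in Python (the offsets are invented, and real deformations involve whole meshes rather than one number):

```python
def blend_corners(x, y, corners):
    """Bilinearly blend four corner poses for parameters in [-1, 1].
    `corners` maps (x, y) corner pairs to a stored mesh offset."""
    fx, fy = (x + 1) / 2, (y + 1) / 2      # remap [-1, 1] to [0, 1]
    return ((1 - fx) * (1 - fy) * corners[(-1, -1)]
            + fx * (1 - fy) * corners[(1, -1)]
            + (1 - fx) * fy * corners[(-1, 1)]
            + fx * fy * corners[(1, 1)])

# Hypothetical horizontal offsets of an iris at each parameter corner:
corners = {(-1, -1): -10.0, (1, -1): 10.0, (-1, 1): -10.0, (1, 1): 10.0}
print(blend_corners(0.0, 0.0, corners))  # 0.0
```

Once the corners exist, every diagonal eye position falls out of the blend automatically, which is why the gaps disappear.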
Eye Close Movement [ + Physics Parameters]
Within Live2D Cubism, there is a designated "physics tab" which allows you to add flowy, swaying movement to your model whenever it moves in real time. This movement is controlled by the settings and inputs connected to its pendulum.
In order to use this feature, you need to set up special parameters that account for these movements. These parameters need to have the "Blend Shape" setting activated.
This feature bridges the gap between the differences to the model geometry and the altered parameters. It results in flowy movement without worrying about multiplying parameters.
Here is a visual demonstration of how I deformed the eyelash art meshes to create flowy lash physics that are triggered whenever the eye closes.
For both the iris and pupil, I utilised one of the twelve principles of animation I learnt during my first year on this course: the use of squash and stretch, to give the eyes a bouncy movement whenever the eye closes.
You can refer to this example below to see how I deformed the art meshes in each parameter to reflect this effect in action.
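Squash and stretch is often kept volume-preserving: when one axis squashes, the other stretches by the inverse factor. Here is a tiny Python sketch of applying that idea to mesh vertices (the centre point and factor are arbitrary stand-ins):

```python
def squash(vertices, factor, cx=0.0, cy=0.0):
    """Squash vertices vertically by `factor` while widening them
    horizontally by 1/factor, preserving the apparent area."""
    return [(cx + (x - cx) / factor, cy + (y - cy) * factor)
            for x, y in vertices]

# A hypothetical iris vertex squashed to 80% height as the eye closes:
print(squash([(2.0, 2.0)], 0.8))  # [(2.5, 1.6)]
```

Keeping the area roughly constant is what makes the squash read as soft and bouncy rather than as simple shrinking.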
As this is one of my first times rigging a model within Live2D Cubism, I opted not to tinker too much with the pendulum settings of the physics tab, preferring to rely on the provided defaults instead.
These can be visualised within the pendulum preview section of the physics tab. As you can see, here you can tinker with the duration, shaking influence, reaction time and convergence of the pendulum.
The only sections I tinkered with significantly were these output settings belonging to the physics parameters I set up earlier. This is so that I could control how much the default pendulum settings would affect the output of these physics.
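The pendulum behaviour can be loosely approximated as a damped spring that lags behind its input, with an influence value scaling the final output. The Python sketch below is my own simplification; all of the constants are stand-ins rather than Cubism's actual settings:

```python
def simulate_physics(inputs, stiffness=0.3, damping=0.7, influence=0.8):
    """Step a damped spring follower through a series of input values.
    Returns the influence-scaled output at each step."""
    pos, vel, outputs = 0.0, 0.0, []
    for target in inputs:
        vel = damping * vel + stiffness * (target - pos)
        pos += vel
        outputs.append(influence * pos)
    return outputs

# The input jumps to 1.0 (eye closes); the output lags behind, then
# settles near 0.8 because the influence scales it down:
trace = simulate_physics([1.0] * 30)
print(round(trace[0], 3))  # 0.24 on the first step
```

This lag-then-settle curve is what gives the lashes their flowy follow-through, and the output scaling is the equivalent of the influence settings I adjusted.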
This is the outcome of the "Eye Close" movement after this setup. As you can see, when it detects that the eye is closing and opening, it triggers the relevant physics parameters.
All of this can be applied for the rest of the physics, really! Here is the outcome of applying the same logic and practices to add physics to the movement of the eyeball as it moves along the X and Y axes.
Eye XY Movement [ + Physics Parameters]
I have already gone over how you deform and save the movement of an eye moving along the X and Y axes; however, I achieved this by rigging only one of the eyes to start with and then copy-pasting the movement onto the other side.
To start, make sure to delete all art meshes belonging to the eye you have not rigged.
The rest is possible by selecting the deformer and related eye art meshes that you would like the other side to have.
You then have to hit [CTRL+C] and then [CTRL+V] on your keyboard to make a copy of these meshes. The next step is to select the new meshes, right-click with your mouse and hit the "motion mirroring" button.
Ensure that you select the option to reflect the "Eyeball X" parameters so that they move naturally and not in the opposite direction they are meant to.
Below is a visual example of the end result. As you can see, this even accounts for the physics parameters I set up during the previous stage, significantly cutting down the time needed to complete the eye rigging. Not just that, it guarantees the symmetry of the physics and overall movement.
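At its core, mirroring the copied meshes means reflecting each vertex about a vertical axis and negating the horizontal parameter keys so the new eye doesn't look in the opposite direction. A rough Python sketch (the axis position and key values are placeholders):

```python
def mirror_vertices(vertices, axis_x=0.0):
    """Reflect mesh vertices about the vertical line x = axis_x."""
    return [(2 * axis_x - x, y) for x, y in vertices]

def mirror_param_keys(keys):
    """Negate a horizontal parameter track so the mirrored eye looks
    the same way as the original, not the opposite one."""
    return {-t: -v for t, v in keys.items()}

left_eye = [(3.0, 1.0), (4.0, 1.0)]
print(mirror_vertices(left_eye))  # [(-3.0, 1.0), (-4.0, 1.0)]
print(mirror_param_keys({-1.0: -10.0, 1.0: 12.0}))  # {1.0: 10.0, -1.0: -12.0}
```

The second function is the rough equivalent of ticking the "Eyeball X" reflection option: without the sign flip, the mirrored eye would drift outward when the original drifts inward.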
Reflection
I had an absolute blast with studying and attempting to put Live2D animation techniques into practice. It wasn't something I had considered using before in previous projects as it genuinely never crossed my mind, however, I am so pleased with how my experiment went.
However, I do have some doubts about my ability to create a high-quality rig, especially when it comes to actually drawing the illustration of the model myself. I believe this is mostly due to my lack of experience in this area. Despite knowing it is something I am passionate about, there is still a clear skill gap in both areas that I need to fill as I progress further.
In my own experience, this is a technique that requires a lot of trial and error, as every model differs from another in some shape or form, especially when considering different art styles or layer-cutting abilities. Even an experienced rigger* can struggle with animating parts of a model that aren't cut properly for the movement they are trying to achieve.
(*note: the term "rigger" in this context is used to describe someone who does Live2D animation, not to be confused with the term used within stage production.)
This means that being an experienced digital artist in general isn't quite enough to produce a high-quality model, regardless of how objectively good their artwork is. It is important to study proper layer-cutting techniques so that your PSD file accounts for any hidden parts the animation will require, not to mention so that there is enough separation to ensure movement can be as fluid as possible.
Of course, there can be such a thing as over-cutting your layers, and you must take file size into account so that the model isn't too heavy to run on the client's computer specs or preferred software. And of course, there is also just plain old bad layer-cutting that doesn't make sense and can't be used at all when animating.
Fortunately, this is a medium that is widely documented on the internet; new guides and tutorials are released practically every single day by both independent and corporate creators who wish to share their knowledge with upcoming artists and riggers.
I am positive when I say that this is something I can realistically see myself using in the future, including my current ongoing project.