I went to the Sharcnet GPU and Cell Computing Symposium at my university last week. One of the keynote speakers, Dr. Paul Woodward, pretty much blew my weak and feeble mind. He used Roadrunner, a.k.a. the fastest supercomputer in the world, to compute some impressive fluid dynamics. You know your research group is on the right track when you have a Wallpapers section on your web site.
Back on planet Earth, we computational plebs don’t have such luxurious resources at our fingertips. To start with, you don’t have a lot of options when you feel like simulating fluids on a computer. Things get even more depressing when you want fluid flow computed in real time. In video games, or maybe just in old video games, accurate water simulation ends up being avoided altogether with some elaborate ruse to keep computation costs down. Why do fluids have such a bad rep? To paraphrase Wikipedia: “We suck at solving systems of non-linear partial differential equations, here’s a million dollars if you figure it out, good luck lol”.
I find this whole water simulation extra-depressing because the universe computes the solution to these problems instantly and literally rubs it in our face whenever the wind blows (via turbulence). Just imagine with me what life would be like if we had an exact solution for fluid flow. Picture weather forecasts that actually forecast, reliable message-in-a-bottle delivery using ocean currents, and aerodynamic everything.
Wow, nice introduction. This post is actually about my fluid simulation project for Google Summer of Code. To achieve real-time fluid simulation in Step, I intend to follow the Smoothed Particle Hydrodynamics (SPH) algorithm outlined by Muller et al. in “Particle-Based Fluid Simulation for Interactive Applications”. This method works by breaking up a large fluid body into discrete chunks and using these chunks to create smoothed-out fields. In fluids, the quantities we care about are the pressure, density, and velocity fields. SPH is just a fancy way of figuring out the forces that act on your fluid by taking these fields into account.
Without going into all the mathematical detail presented by Muller et al., we can begin to define our fluid particle class in StepCore, the mathematical back-end of Step. Note that all of this is pretty fuzzy right now since I haven’t started coding anything. Many important properties of FluidParticle already exist in the base class Particle in Step, as depicted below. Nothing too special here, just get and set methods for the particle member variables. One thing to note is that FluidParticle has a specific radius that defines the core radius of each particle. However, at this point it is unlikely that the radius of each particle will vary. I’ll talk about Errors in a separate post also.
Now that the FluidParticle is defined we can look at the collective group, the Fluid. All “bodies and forces” in Step are derived from the root class Item. A Fluid is simply a group of FluidParticle items in a handy “Item Group”. The main purpose of this group is to extract macroscopic features of the fluid particles that exist within a rectangle drawn in the GUI.
This rectangular data inquiry box was designed for use with the Gas class where determining the number of particles, the kinetic energy, and the temperature are meaningful quantities for statistical mechanics. However, I’m not really sure how well the physics of statistical mechanics translates to the mathematics of a fluid. For instance, I may know the velocities of all my fluid chunks but is it correct to use these velocities to determine the so-called “temperature” of my fluid? I suppose so… but I have to do a bit more research.
Nonetheless, for now I’ll just use the rectPressure(), rectVelocity() and rectDensity() methods. I’d like these methods to calculate the average pressure, velocity, and density not just by looping over the values stored in each fluid chunk, but by averaging over the entire field created by the fluid chunks. I’m not sure how computationally demanding this calculation will be, but if I discretize the space within the fluid into some sort of grid this may be possible.
So now we have a Fluid that is composed of individual FluidParticles. Where does the physics come in? We need to look at the FluidForce class for that. The purpose of the FluidForce class is to do the work of applying the inter-fluid forces to each particle in our item group by means of the calcForce method, which can also calculate the variances based on a boolean flag. In the process of calculating the forces on each fluid particle, I’ll have to use the helper method calcPressureDensity() as well.
You’ll notice that the only member variables for the FluidForce are a cutoff value and FluidErrors (which I will elaborate on later). The cutoff represents a limiting distance beyond which the force contribution from a distant particle would be so small that it’s not worth calculating. In the current Gas implementation in Step, calcForce requires O(n^2) comparisons, as we have to loop through all pairs of particles to determine whether they are separated by less than the cutoff value. As mentioned briefly in Muller et al., a potential speed-up over this approach is to place our fluid particles on a grid and only calculate the force contribution from particles attached to grid cells neighboring our particle of interest. I’ll address performance issues later in my project.
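To make that speed-up concrete, here is a minimal sketch of the grid idea. It is purely illustrative: Step’s actual code is C++, and the NeighbourGrid name and interface are my own. Particles are binned into cells whose side length equals the cutoff, so each particle only has to be tested against the nine surrounding cells rather than all n particles.

C#

using System;
using System.Collections.Generic;

public class NeighbourGrid {
    readonly double cellSize;
    readonly Dictionary<(int, int), List<int>> cells =
        new Dictionary<(int, int), List<int>>();

    public NeighbourGrid(double cutoff) { cellSize = cutoff; }

    (int, int) CellOf(double x, double y) {
        return ((int)Math.Floor(x / cellSize), (int)Math.Floor(y / cellSize));
    }

    public void Insert(int particle, double x, double y) {
        var key = CellOf(x, y);
        if (!cells.TryGetValue(key, out var bucket))
            cells[key] = bucket = new List<int>();
        bucket.Add(particle);
    }

    // Yields candidates from the 3x3 block of cells around (x, y); the
    // caller still applies the exact distance < cutoff test before
    // computing any force contribution.
    public IEnumerable<int> Candidates(double x, double y) {
        var (cx, cy) = CellOf(x, y);
        for (int dx = -1; dx <= 1; dx++)
            for (int dy = -1; dy <= 1; dy++)
                if (cells.TryGetValue((cx + dx, cy + dy), out var bucket))
                    foreach (int i in bucket)
                        yield return i;
    }
}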
I’ve already overlooked a few critical issues that may require the modification of these classes. A fundamental characteristic of a fluid is viscosity, a fluid’s resistance to deformation under shear stress. Obviously, my GSOC project will not be complete unless I can simulate some delicious sticky honey. At this stage, for simplicity, I’ll probably just hard-code some viscosity values. The second critical issue will be collisions. My mentor and I both agree that collision handling may get tricky. If you’ve ever been stuck in a wall or a tree when playing a video game you’ll know what I mean.
I’ll address that problem eventually but next I’ll briefly discuss the Error classes associated with the fluid objects. The calculation of errors/variances is one of the cool features that makes Step unique!
After that, I’ll explore the Qt and GUI side of Step. I’ll connect the classes I’ve defined above to user interactions. Once I do that, things should get a lot clearer in terms of how the FluidForce, Fluid and FluidParticles work in conjunction with the World.
Maybe you know that nowadays endless runner games like Zombie Tsunami, Temple Run, and Run 3 are hugely popular thanks to their addictive gameplay and challenges. In this article, I will show you how to make a simple 2D endless runner with Unity3D (no programming knowledge required). As for Run 3: it is a 3D game that requires an understanding of advanced programming, so I will not cover it in this guide. I will write separate articles about 3D games later.
(Photo: game zombie tsunami)
Part 1: Preparation
Firstly, download and install Unity from the following link: https://store.unity.com/
Just choose the free version; it includes all the essential features we need to make a game.
After installing and launching the software, the project window will appear.
Next, click the On Disk button (so you can work offline without an internet connection), then click the New button to create the project.
After that, wait a moment and the main working screen will appear.
You may be a little confused here; look at this photo to get your bearings.
Next, let’s learn some basic tools for manipulating an object.
From left to right: pan the view – move objects – rotate objects – scale objects – adjust an object’s rectangle (size and stretch).
Some terms that will be used throughout this article:
Object (from now on called Object or obj): anything in the game. For example, characters, the sun, a cat, the background.
Sprite: just a picture, no more, no less. When a sprite is attached to an obj, the obj takes the shape of the sprite. You can think of it as the obj’s appearance, like a person’s “skin”.
Script: a code file. If the sprite is the skin, the script is the brain. Everything we program for an object (hitting enemies, touching the screen, jumping, etc.) lives here.
Component: the things that make up an obj, such as sprites and scripts, are all components. You can think of a component as a “part” of the object.
*Note: save before leaving Unity. If you run into problems while making the game, you can refer to my project:
You should download these pictures; the article does not cover how to draw characters or scenery, so I have prepared the essential images (sprites).
After downloading, extract the files. The only ones we need are the PNG files; the rest you can delete.
Part 2: Creating the terrain
Next, go to Unity3D and create the following folders by right-clicking in the Assets pane (if you are wondering where the Assets pane is, look at the upper-left corner of each frame to find its name) and choosing Create -> Folder:
Sprite
Script
Scene
Animation
Next, import the PNG files you just downloaded into Unity: open your file browser in thumbnail mode and drag the PNG files into the Sprite folder in Unity.
Open the Sprite folder -> png -> Tile. Here you will see the pictures of the ground. Drag and drop the images into the Scene pane on the left and use the tools above to create your desired terrain.
Now, click any block in the Hierarchy frame and then look at the Inspector frame. If you see two components like mine, you are doing it correctly.
Let’s talk a bit about the Transform component.
Position: the position of the obj in the scene.
Rotation: the rotation of the obj.
Scale: the size of the obj.
In addition to using the tools I mentioned above, you can also edit these parameters directly to create the desired terrain.
Next, go to the png folder and drag BG (the background) into the Scene frame. In BG’s Sprite Renderer component, find Order in Layer and set it to -10. You can think of this as the display priority of the game: an obj with a higher Order in Layer is drawn on top of objs with lower values. Since our BG needs to be the background, its Order in Layer must be lower than that of the other objects.
Then, select Create Empty in the Hierarchy frame and name the new obj matdat, then drag all the ground objs onto matdat. They now become sub-objects of matdat, which means that when you move matdat, all of them move as well.
Now select all the ground objects and choose Component -> Physics 2D -> Polygon Collider 2D. This creates a collision frame so that objects can stand on the ground without falling through it.
Part 3: Making the character
Do you see the images run (1) to run (8)? Select all of them by clicking run (1), holding Shift, and clicking run (8). Drag and drop them into the Scene. An animation dialog will appear; save the animation in the Animation folder.
The character may be quite large; adjust it accordingly. I chose a scale of (0.3, 0.3, 1).
As with the ground above, add a Box Collider 2D component to the character.
Next, add a Rigidbody 2D component to the character. This component makes the character subject to gravity.
Remember to rename the character so it is easy to manage; preferably change run (1) to character.
Now, press Play and enjoy your achievement. The next part will be more difficult.
Part 4: Writing Script
In this part, I will show you how to write the scripts. If you have a little programming knowledge it will be easy to understand and you can even change the source code; otherwise, just follow the instructions.
In the Script folder, right-click, select Create -> C# Script, and name the file character.
Then double-click the file and wait a moment; the script editor will open. (On some machines the script editor has a different interface, but that’s fine.)
Now, write the following. // Anything written after a marker like this is a comment for human readers; the computer will not execute it.
C#

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class character : MonoBehaviour {

    public float speed = 4f; // Declare speed

    Rigidbody2D mybody;

    // The Start function runs only once, at the start of the game
    void Start() {
        mybody = GetComponent<Rigidbody2D>(); // Get the Rigidbody2D component
    }

    // The Update function runs continuously, once per frame
    void Update() {
        mybody.velocity = new Vector2(speed, mybody.velocity.y); // Create a forward thrust
    }
}
Now press Ctrl + Shift + S to save the file. Go back to Unity and drag the character script file onto the character obj (or click Add Component in the Inspector frame and type “character” in the search box).
Next, look at the Rigidbody 2D component, open Constraints, and tick the Z box (so that the character does not rotate when a force is applied).
Press Play and enjoy your achievement.
Now, back in the character script file, we will write code that lets the character jump when the player touches the screen.
C#

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class character : MonoBehaviour {

    public float speed = 4f; // Declare speed
    public float jump = 7f;  // Declare jumping height

    Rigidbody2D mybody;

    // Use this for initialization
    void Start() {
        mybody = GetComponent<Rigidbody2D>(); // Get the Rigidbody2D component
    }

    // Update is called once per frame
    void Update() {
        mybody.velocity = new Vector2(speed, mybody.velocity.y); // Create a forward thrust
        if (Input.GetMouseButtonDown(0)) { // If the player touches the screen...
            mybody.velocity = new Vector2(mybody.velocity.x, jump); // ...create an upward thrust to jump
        }
    }
}
Save and then press Play to test.
Finally, drag the Main Camera and the BG obj onto the character obj to make them sub-objects. This way, when the character moves, BG and the Main Camera move as well.
In the Assets pane, create a Material folder. Select Create -> Physics Material 2D. Set both of the material’s parameters (friction and bounciness) to 0.
Select all the ground objects and drag the new material into the Material slot of each Collider 2D.
Part 5: Writing the endless runner script
Drag any sprite into the Scene frame (mine is StoneBlock) and set a large X scale (about 1000). Drag the obj to the right edge of the camera. Do the same for the lower border. (These two objs act as boundary lines for the terrain.)
Add a Polygon Collider 2D to the two objects above.
Let’s make the terrain longer by adding more ground objs: add the Polygon Collider 2D component, add the material, and set each one as a sub-obj of matdat. You can also decorate with other objs from the object folder (do not add a Polygon Collider 2D to decoration objs, so the character can pass through them, but do still make them sub-objects of matdat).
Change the character obj’s Order in Layer to 10.
Click Create Empty and name the new obj checker1, then add a Box Collider 2D component and tick Is Trigger. Position it at the right border of the camera or BG, as in the photo above. Set this obj as a sub-obj of the character.
Look at the Inspector, click on Tag, select Add Tag, click +, and name the new tag checker. Then click the checker1 obj and set its tag to checker.
Create another obj and name it checker2. Set it as a sub-object of matdat. Again, add a Box Collider 2D and tick Is Trigger. Place this obj at the bottom of the terrain.
The idea is that when the two checkers pass through each other, they create a clone of the matdat obj just behind the current one. That way the character can run endlessly (because terrain is constantly being created).
Now create a script file named matdatspawn in the Script folder. Open it and copy in this code.
C#

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class matdatspawn : MonoBehaviour {

    public float khoangcachx; // Create the distance variable x
    public GameObject matdat; // Declare the ground obj

    // (The body of this listing is an assumed reconstruction based on the
    // description below.) When the character's checker1 (tag "checker")
    // passes through this trigger, spawn the next stretch of terrain.
    void OnTriggerEnter2D(Collider2D other) {
        if (other.CompareTag("checker")) {
            Instantiate(matdat,
                new Vector3(matdat.transform.position.x + khoangcachx,
                            matdat.transform.position.y,
                            matdat.transform.position.z),
                Quaternion.identity);
            Destroy(matdat, 10f); // Remove the old terrain after 10 s
        }
    }
}

The Instantiate statement creates a clone obj with the following parameters: (original obj, position, rotation). The Destroy statement removes an obj after a certain period of time (10 s), here the old matdat.
Now save it and add the newly created script to the checker2 obj in Unity.
Right-click the matdat obj and select Duplicate. Move the matdat (1) obj horizontally to the spot where you want the clone to be created. Then do a quick calculation: take the x position of matdat (1) minus the x position of matdat, and remember the result (in my case it is 83.9).
Now delete the matdat (1) obj.
Then delete the two objs we created above as well.
Return to the checker2 obj and look at the matdatspawn (script) component. Enter the result you just calculated into the khoangcachx box.
For the Matdat field, drag and drop the matdat obj into it.
Finally, press Play and experience the game.
Part 6: Fixing some errors.
Error 1
When our character stops because it is blocked by stones, the camera stops too. I do not want this, so I’ll fix it.
First, pull the Main Camera out so it is no longer a sub-obj of the character.
Next, drag the two BG objs and checker1 in as sub-objs of the Main Camera.
In the camera’s Inspector, add two components: the character script and a Rigidbody 2D.
In the Jump field, enter 0. In the Rigidbody 2D, set Body Type to Kinematic.
Then drag the BG objects, the character, and the Main Camera to the middle of the screen (in left-to-right order).
Error 2
The character can still jump while in the air.
First, create a tag named ground. Then attach the newly created tag to all the ground objects, and update the character script as shown below.
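To put the new tag to work, the character script needs to know when the character is actually touching the ground, and only allow a jump at that moment. Here is a minimal sketch of that fix, building on the character script from Part 4 (the onGround flag and the collision callbacks are my own addition).

C#

using UnityEngine;

public class character : MonoBehaviour {

    public float speed = 4f; // Declare speed
    public float jump = 7f;  // Declare jumping height

    Rigidbody2D mybody;
    bool onGround = false; // True only while touching a "ground"-tagged obj

    void Start() {
        mybody = GetComponent<Rigidbody2D>();
    }

    void Update() {
        mybody.velocity = new Vector2(speed, mybody.velocity.y);
        // Only jump when the character is standing on the ground
        if (Input.GetMouseButtonDown(0) && onGround) {
            mybody.velocity = new Vector2(mybody.velocity.x, jump);
        }
    }

    void OnCollisionEnter2D(Collision2D col) {
        if (col.gameObject.CompareTag("ground")) onGround = true; // Landed
    }

    void OnCollisionExit2D(Collision2D col) {
        if (col.gameObject.CompareTag("ground")) onGround = false; // Airborne
    }
}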
There are two main types of numerical errors. The first is the result of computers having a limited number of bits (32 or 64) to represent numbers. All the coolest real numbers have an infinite number of digits, so we’re restricted to representing numbers that differ by at least machine epsilon.
The second source of numerical error pops up a lot in numerical algorithms, including those implemented in Step for solving differential equations. It would be nice if we could perform exact symbolic computation all the time, but physics can get messy, especially with errors! Almost any time we take a derivative on a computer, we approximate it using a Taylor series. This approximation works great if we keep an infinite number of terms, but who has the time to calculate all that? Thus, our results differ from the exact answer based on where we truncate our Taylor series. This truncation error, combined with the floating-point error mentioned above, has the potential to cause serious numerical instabilities and pain for computational scientists.
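For example, truncating the Taylor expansion $f(x+h) = f(x) + h f'(x) + \frac{h^2}{2} f''(\xi)$ after the first-order term gives the familiar forward-difference approximation

$$f'(x) \approx \frac{f(x+h) - f(x)}{h},$$

whose truncation error $-\frac{h}{2} f''(\xi)$ shrinks only linearly with the step size $h$, while making $h$ too small runs into the floating point error described above.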
Numerical issues aside*, in true experimentalist fashion, Step allows users to keep track of the propagation of user-inputted uncertainties over the course of a simulation. The mathematics behind this process is reviewed briefly here. As an example, let’s look at the calculation of the uncertainty in kinetic energy in Step’s ParticleErrors class.
The kinetic energy (KE) of a particle is a function of both mass and velocity, where each of these variables has its respective uncertainty. To calculate the variance of the kinetic energy we sum the squared partial derivative d(KE)/dv multiplied by our velocityVariance and the squared partial derivative d(KE)/dm multiplied by our massVariance. The function only looks a bit confusing because we want a scalar value and we have to use a few Eigen functions to deal with 2D vectors.
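Written out for $KE = \frac{1}{2} m v^2$, that standard first-order propagation rule is

$$\sigma_{KE}^2 = \left(\frac{\partial KE}{\partial m}\right)^{\!2} \sigma_m^2 + \left(\frac{\partial KE}{\partial v}\right)^{\!2} \sigma_v^2 = \frac{v^4}{4}\,\sigma_m^2 + m^2 v^2\,\sigma_v^2.$$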
The trouble with smoothed particle hydrodynamics is that each “fluid particle” is inherently an approximation of many particles, or a bulk region of fluid. Nonetheless, the calculated densities and pressures for each chunk of fluid are subject to uncertainty. As outlined by Muller et al., any scalar quantity A can be calculated by summing over all particles j with some smoothing kernel W:
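$$A_S(\mathbf{r}) = \sum_j m_j \frac{A_j}{\rho_j}\, W(\mathbf{r} - \mathbf{r}_j, h),$$

where $m_j$ and $\rho_j$ are the mass and density of particle $j$, and $h$ is the smoothing length of the kernel.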
Given the user-inputted uncertainty in particle mass m and whatever calculated uncertainty we have for our quantities A and ρ, this formula can be used with the method above to calculate the uncertainty in the newly calculated quantity A. These calculations will be coded in the FluidParticleError class outlined below. Note the addition of the member variables densityVariance and pressureVariance. These values could be calculated on the fly, but doing so would involve a large number of calculations due to the summation over all nearby particle errors. Later in this project this will also include viscosity error calculations that depend on the velocities of nearby particles, using the same equation above.
It’s a week into my GSOC project and I still have lots of work to do. Next post will be a bit more software development oriented, as I’ll look more at the Qt GUI and how it connects with the numerical back-end of Step. Expect a post shortly!
*You can adjust the precision of your Solver in the properties dialog box of Step!
I’ve been working hard on two programming projects this summer, namely my Google Summer of Code project and a fancy upgrade to some molecular dynamics software. In my previous posts, I looked at the back-end of Step and some of the mathematics of smoothed particle hydrodynamics. There are still plenty of outstanding problems in those areas, but I’ll address those gradually in the coming weeks. For this post, I’ll give an overview of all the GUI/user interactivity stuff that I’ll have to tackle on my quest to implement fast fluid simulation.
I’ll just enumerate the ways the user can interact with a fluid and trace through exactly what is going on behind the scenes. I know, KDevelop/IDEs are pretty fancy, but I’m oldschool so I tend to just follow the code execution manually. It’s kind of like a grep-based treasure hunt! So, I’ll just give some running commentary with bonus screenshots from the current Gas classes. I doubt many readers of this blog will find this interesting, but my dream is that this post series may someday be useful for a new Step developer =P
Oh, and if you’re a Qt newbie, just remember that any class starting with a Q is a Qt class, and don’t forget your signals and slots.
User clicks the fluid object in the item palette and adds it to the Scene.
When any action, such as clicking on a button, is performed by the user, a QAction::triggered signal is emitted by that object. Within the itemPalette class (as pictured above) we deal with this signal by connecting it to an actionTriggered slot.
This slot then passes over the responsibility of creating a new object from the itemPalette to the WorldScene’s beginAddItem method. The WorldScene handles all things graphical in Step. It inherits QGraphicsScene and provides us with a safe place for visualizing our 2D objects. Some day, when a brave Summer of Code student steps up to the challenge, Step will evolve to 3D and leave QGraphicsScene in the dust.
WorldScene then initializes a new ItemCreator. You can only add one item at a time to the WorldScene, so it checks whether an itemCreator has already been defined, and if so deletes it while emitting an endAddItem signal. Anyway, since we don’t actually know the type of the item being created, we have to pass the item over to the worldFactory class to query our object. Once we know more about the object we call the specific newItemCreator class of that object, which in my case will be FluidCreator. It’s worth mentioning that every class derived from Item has its own Creator, GraphicsItem, and MenuHandler classes. I’ll go over these as they come up.
FluidCreator will be a class with the sole purpose of adjusting the initial state of the fluid before it gets placed on the WorldScene. Its definition should look something like this:
At this point the user gets a friendly notification to click somewhere to position the start of the fluid. In the Phun physics sandbox you can “liquify” any object to create a fluid. This can be pretty entertaining, but when it comes to building a precise simulation, a boring rectangle is best suited!
Any WorldScene clicks get passed to the fluidCreator sceneEvent until the item creation is flagged as being completed. Three different mouse events are handled by the fluidCreator scene event.
The user clicks the mouse. This creates a fluid object and a fluidForce object that are added to the World. This case also starts a “Macro” for undo purposes later on.
The user drags the mouse. This updates the measureRectSize and measureRectCenter fluid attributes.
The user releases the mouse. This finalizes the rectangle information. If the user did not drag the mouse, a default-size rectangle is created. Releasing the mouse prompts the user to input detailed attributes for the fluid. For early development all values should be fixed, as only certain values may permit stable fluids. Once this information is added, the fluid particles are created within the fluid and the Undo macro is ended. A nice pop-up confirming the creation of a new fluid appears and we are left with a rectangle filled with non-overlapping fluid particles.
On top of all that, when the user releases the mouse, a menu handler is created. This menuHandler takes care of both the context menu and the creation of new fluid particles. By itself, a fluid is just a framework for keeping track of fluid particles. Separating the creation of fluid particles from creation of a fluid allows the user to re-run the particle creation method until a desired smoothness is achieved. MenuHandler also kicks off a critical chain reaction by adding a FluidParticleList to the WorldModel.
The WorldModel is really the central nervous system of Step. All solvers, constraints, collisions, and general mayhem occur within this monster of a class. I won’t unnecessarily spill the guts of Step into this post, but suffice it to say, things get quite elaborate at this point.
To quote my mentor:
In WorldScene::worldRowsInserted signal handler, which is called when new item is added to the World, the corresponding graphics item gets created and initialized. In WorldScene::worldDataChanged slot, which is called when some (or all) items in the World changes their state, it calls WorldGraphicsItem::worldDataChanged method for each graphics item on the scene so that each item can redraw itself
The question I have to ask is: how should one draw a fluid particle? I will have to do some hands-on experimentation with this later, but it is important that the fluid particle spheres overlap to give the illusion of a continuous fluid. The actual drawing of the fluid particle object is coded in the GraphicsItem::paint method.
User selects a fluid particle.
Selecting or hovering over a fluid particle is considered a “stateChange” event and is handled depending on the object. User interactivity with the fluid may be an interesting task, especially in real-time while the simulation is running. However, clicking and dragging a fluid particle should be entirely possible even when pushing against other fluid particles. As with a Gas, I feel that a fluid particle should display a velocity vector (as pictured below). This velocityHandler is created using the Vector2D velocity quantity of our fluid as follows:
_velocityHandler = new ArrowHandlerGraphicsItem(item, worldModel, this, _item->metaObject()->property("velocity"));
In particular, ArrowHandlerGraphicsItem is described in detail in the WorldGraphics class. One idea I may be interested in implementing in the future is support for displaying “velocity field lines” to show the instantaneous velocity at a fixed grid of points throughout the fluid. Another cool graphical feature might be a rainbow color mapping to identify areas of high pressure/density.
User presses simulate.
The actual simulation process, in other words the integration algorithm that combines all our forces to step forward in time, is briefly outlined in the Introduction to StepCore document here (http://stepcore.sourceforge.net/docs/design_intro.html). But what end user in their right mind is interested in such things?
End users can keep track of two types of measurements for a fluid.
Exact properties of a selected “fluid particle”.
Average observables of our system like average velocities, densities and pressures using a rectangular selection.
The latter would be the most accurate data to obtain from the fluid. The difficulty with the first measurement is that fluid particles are an arbitrary representation of a bulk amount of fluid. Nonetheless, for either measurement, the properties browser gets updated with data in a similar way. PropertiesBrowser has slots to catch when an object gets selected or when data changes in the world. Then it updates the browser fields accordingly.
A difficult aspect of the simulation will surely be the collision of fluid particles with other objects. In that case, it is imperative that the effective radius of a fluid particle avoid overlapping excessively with other objects. Perhaps by tweaking the smoothing kernel, no overlap will actually occur, but I must pay close attention that the fluid particles don’t appear to penetrate through an object when in fact the core particle is somewhere safe. This is a case where dynamic graphical scaling of the radius of a particle might be beneficial.
User deletes the object.
Deleting a fluid is a bit less complex than creating a new fluid and fluid particles from scratch. It’s more or less just a matter of getting your book-keeping straight. Any delete event, for instance selecting an object and pressing the delete key, will be caught and passed on to the worldModel for annihilation.
That’s just about all the interactions the user has with the fluid. Phew, talk about exhaustive. Next post, hopefully on Thursday or Friday, I’ll touch on some collision detection issues. That’ll be the final topic before I dive into a glorious ocean of code.
GSOC has finally come to a close. It seems like yesterday when I was blogging on here, young and foolish about the complexity of my project. What you see above are my modest results. Pretty as it might look, this fluid is most definitely code-name O(n^2), and that’s not really a good thing! Let me explain the name in the three major aspects of my project.
Realism
It’s one thing to implement a smoothed particle hydrodynamics paper in Step (that was basically done by the midterm), but how can you make it… well, physical? In an SPH system, there are several parameters that must be adjusted to achieve that. Namely: the temperature-dependent gas constant, the rest density, the smoothing kernel length, the surface tension coefficient, the viscosity constant, and the universal bounciness of object-object interactions. With the exception of the surface tension and bounciness, I was able to determine a number of these constants for a working fluid using trial and error.
Most significant for computation was the adjustment of the smoothing kernel length. This is the radius around each fluid particle over which the particle’s effects are spread out. According to the SPH algorithm I implemented, at each time-step we compute the pressure at each fluid particle based on the density of all fluid particles within the smoothing kernel radius. That being said, at each time-step I made O(n^2) distance comparisons between fluid particles, and in the limit of an infinitely large smoothing kernel there would be O(n^2) arithmetic operations. Among some other issues I will discuss, I found this to be a limiting factor for the number of fluid particles I could use.
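For reference, the two relations at the heart of that loop, as given in Muller 2003, are the smoothed density sum and the modified ideal gas equation of state:

$$\rho_i = \sum_j m_j\, W(\mathbf{r}_i - \mathbf{r}_j, h), \qquad p_i = k\,(\rho_i - \rho_0),$$

with $h$ the smoothing kernel length, $k$ the gas constant, and $\rho_0$ the rest density from the parameter list above.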
Unfortunately, this prevented me from testing an important aspect of fluids: surface tension. By adding the fake surface tension forces outlined in Muller 2003, my fluid would be calmer and perhaps even support the weight of a boat at its surface. Adding bounciness would additionally allow gravity to act realistically on the fluid and have density collect at the bottom of a jar.
So, how can one improve on this brute-force enumeration of all pairs of particles? My expert mentor Vladimir suggested a new data structure might be the best approach. Since both fluids and gases benefit from a large number of particles, it would be advantageous to implement an Axis-Aligned Bounding Box data structure. Not only would this help with collision detection between any Step objects, but it would provide a way to only calculate pressure forces for neighboring fluid particles. This works because we store sorted lists of boxes along both our X and Y axes and keep track of box overlaps. Unfortunately, my mentor and I did not have enough time to implement this complex overhaul of StepCore.
Measurement
Step is about education. For educational purposes, at the very least, it would be great to be able to know the average density and pressure within a region of the fluid. Despite my fluid being somewhat artificial and hacked together with strange internal values, I managed to achieve this. However, the computational complexity is, once again, not pretty.
The calculation of pressure at a point in the World was already used for the dynamics of the fluid. I merely had to generalize it to discrete area elements of Step’s “measurement rectangle”. The rectangle can be moved and resized within the Fluid; at each time-step I sliced its area into a grid, calculated the pressure and density at the center of each cell, and then added it all together.
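In equation form, with the rectangle sliced into $N$ cells with centers $\mathbf{r}_c$, the reported averages are simply

$$\bar{p} \approx \frac{1}{N} \sum_{c=1}^{N} p(\mathbf{r}_c), \qquad \bar{\rho} \approx \frac{1}{N} \sum_{c=1}^{N} \rho(\mathbf{r}_c),$$

where each $p(\mathbf{r}_c)$ and $\rho(\mathbf{r}_c)$ is evaluated with the SPH kernel sums from my earlier posts.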
The trouble with that is probably quite clear. For a very large measurement rectangle you may want a widely spaced grid, and for a small measurement rectangle a closely spaced grid. A grid spacing that depends on both the size of the rectangle and the smoothing kernel radius is also difficult to fine-tune.
In the future, it would be ideal for the user to be able to select the precision of the pressure and density calculation at their own discretion, and have the variance values adjust accordingly. I was unable to implement variance calculations for these measurements, but that may also be an expensive calculation!
Visualization
Finally, the pièce de résistance: the jaw-dropping visuals that all fluids deserve. As you can see above, there’s lots of room for improvement. Visualization can be done in two ways. Muller 2003, the paper I was following for this SPH implementation, suggests a fancy isosurface calculation. That is, determine the gradient (surface normal) of the fluid particle density field at every point. Where its magnitude is greater than some threshold, it’s safe to say it’s a surface particle, and it could be used to draw a sexy looking surface.
However, with a deadline looming, my mentor suggested a simpler approach: calculate the density at each point in a grid and then paint a particle there with opacity based on the density. As seen above, these opacity calculations weren’t quite calibrated, since we don’t see much variation in blue except at the edges of the fluid.
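The mapping itself is trivial; here is a hedged sketch of the idea (the names, like maxDensity, are my own illustrative calibration knobs, not Step’s actual API, and Step’s real drawing goes through QPainter in C++):

C#

using System;

static class FluidRendering {
    // Map a sampled density to a sprite opacity, as described above.
    // maxDensity is a hypothetical calibration constant; densities at or
    // above it render fully opaque.
    public static byte OpacityAt(double density, double maxDensity) {
        double t = Math.Min(density / maxDensity, 1.0); // clamp to [0, 1]
        return (byte)(t * 255); // 0 = fully transparent, 255 = fully opaque
    }
}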
I had a problem though. Where should I “draw the line” for drawing the dots? I can’t very well calculate the density at each point in the observable screen area; it would take forever to get a smooth fluid. The solution is to calculate the minimum bounding box for the fluid and use a cut-off value in case one fluid particle flies off the screen. This ended up being quite challenging for some reason, so I decided to only render the fluid within the “measurement rectangle”.
I had an issue with converting between coordinate systems that still remains unsolved. I needed to draw my fluid about the origin of the WorldScene or else my fluid would render in the incorrect location. This is likely due to a simple mapping problem that I intend to figure out soon.
End
So, what’s next for Fluids, Step, and me? The realism, measurement, and visualization issues outlined above will continue to be worked on by Vladimir and me. After some of the fundamental problems are addressed, I foresee a very liquidy Step in the near future. Me? I’m starting an MSc in Physics at the University of Waterloo this fall, but I had a lot of fun on this project and hope to continue making Step a richly featured open-source physics simulator!
Most of my blogging time is dedicated to my hit image blog Fresh Photons these days, but I recently came across this great image above on a blog called Awkward Stock Photos. Whimsically, I decided to see what other stock images were available under the tag “Experiment” and I basically laughed my way through 70 pages of results. To be honest, most of them aren’t so bad, but here are some of my personal favourites,
The copper sulfate guzzler,
The movie villain chemist,
90% of the pictures are of scientists squinting at beakers. Ah, so this is how one does science!,
Science is serious business,
Wait, what?
Oh, I guess it’s a standard chemical reaction,
Just an alien fetus,
It’s about time some scientist extracted the essence of love,
Love extraction failed,
On the contrary, this one is titled “successful experiment”. Great Job!,
Who let sexy tomb-raider into the lab?,
The ‘dot’ in Photos.com adds some extra flair to this one,
Good old topless chemistry,
At least wear a bra!,
Much better,
Girls Gone Wild in the lab y’all,
Lensless safety goggles are also all the rage, I hear,
Fun with needles,
No gloves, but at least he’s focussed on the task at hand,
“Ok, Now try being a more menacing scientist”,
This looks bad…,
Oh nevermind, it’s all good,
Blatant animal testing (via interrogation), “ANSWER US, DID YOU LAY THESE EGGS?!”,
Here’s a lady using a multimeter to clean the crap out of her keyboard,
Cheers to good science and a romance-free workplace,
There’s a great book called The Geek Atlas: 128 Places Where Science and Technology Come Alive, a list of nerdy tourist attractions around the globe. I haven’t exactly read it, but when I left for Paris to do research from September to November last year at the Synchrotron Soleil (pictured above), I made it my goal to explore as many science museums in Paris as I could. On top of that, I had just gotten a new camera before I left, so let me break down my favorite spots in a nice visually stimulating manner.
Even if you’re a super science genius and you’re too cool for these layman museum displays, it’s really interesting to consider these exhibits from a science education perspective. While you’re strolling through, consider the challenge of designing accurate yet interactive science demonstrations. I have a vested interest in these installations, so I found this whole museum quest fascinating on a number of levels.
If you’re a biologist at heart, this place is great. Free for students (don’t forget to bring your student card overseas kids). Awesome animal parade centerpiece surrounded by multiple levels of taxidermy-style displays. Don’t miss the tiger attack and dodo not pictured. Great for photography opportunities and if you’re visiting in the right season, there’s lots of stuff to do outside. A must-see in my opinion.
The most famous/biggest science museum in France, home of the Geode (an IMAX theatre),
I only visited the permanent exhibits (energy, space, optics, etc.) but those alone were worth the price of admission. Be warned that all the displays are explained only in French. If you’re a hopeless English-only speaker like myself, you’ll have to rely on your inner scientific explanatory gusto (or they might have tour guides). What I didn’t expect to see were all the cool mathematical demos. How do you present something as dry and uncool as math? The answer is geometry. Check out this simple cube/square projection and artsy Mobius strip,
I really enjoyed the simple soapy water demonstrations. One was just a demo of contorted wires being dipped in soapy water, demonstrating nature’s great ability to minimize surface area. Another bubble demonstration was a sheet which you could lower and raise into bubbly water and watch the swirling rainbow optical phenomena. Kids (including me) had fun blowing holes in it and recreating it,
Did I mention there were traditional science exhibits? There was a genetics section and several noteworthy physics demos scattered about. I had fun trying to photograph this Newton’s Cradle symmetrically. Plus, a snazzy rotating water wheel.
I’ll be the first to admit the name of this place is confusing. More than anything, it’s a museum of historical science and technology and hence, it’s probably the most boring of the museums for kids. Don’t go here if you’re expecting interactivity, unless you count pressing buttons to activate crazy mechanical dolls (I warned you!) or unless they have a cool temporary exhibit on. For photographers, lots of stuff behind glass, like this fax machine. I was really hoping to get a closer look at such a rare piece of equipment,
Numerous engineering feats were displayed, like a Moon Rover and a Cyclotron,
Despite its grandparent-esque historical nature, I had lots of fun nerding out over old science apparatus. It really made me wish that today’s lab equipment had more wood and brass. If I ever get rich, oh man, you won’t believe the oldschool lab I’m going to build. This place even had a supposed “Laboratory of Lavoisier”. I heard that guy was a total jerk!
There are some cool geometric models by Théodore Olivier,
The original Foucault pendulum, formerly installed at the Panthéon, was/is at this museum, but I think there was a recent scandal where the cable broke. When I was there it was going strong.
They had a sliced-in-half car but you couldn’t sit in it for a picture, shame. No visit to this museum would be complete without a shot of their beautifully suspended Ornithopter.
Science in an ancient French palace, now that’s what I’m talking about! When you walk in you’re greeted with an amazing open-space and ceiling. This place has a planetarium but I was nervous my French would be too fail-ridden to get my money’s worth. The place itself was slightly underwhelming for an English-speaker.
The real draw of this place seemed to be the hands-on science demonstrations for kids. Check out the mathematics room, chemistry demonstration, and unfortunately deserted electricity stage!
This giant T-rex bust would look great over my fireplace, but I’ll guess it makes sense in a museum too. This place also has a decent section on reproduction to school young ones about the birds and the bees.
Certainly not a science museum, but this modern art museum had its share of aesthetically pleasing geometric constructions, not to mention a few Picassos along the way. Here is Antoine Pevsner’s “Construction spatiale aux 3ème et 4ème dimensions”.
And some other optical modern art!
As a summary I’d definitely recommend making the big trip to Northern Paris for “Cité des Sciences et de l’Industrie” and a nice leisurely stroll through Musee d’Histoire Naturelle. I’d suggest that the other two are totally optional.
Other things to do include visiting the famous lab of Marie Curie at the “Curie Museum“. If you’re visiting Paris before April 3rd, 2011 you can also stop by at the Palace of Versailles for their “Science & Curiosities at the Court of Versailles” exhibit.
On top of your normal museum visitations, you can also go on a morbid scientist grave tour. Louis Pasteur is buried at the Pasteur Institute, the Curies and Paul Langevin (whose work is most related to my research) are at the Panthéon, and numerous scientists are buried at the Père Lachaise and Montparnasse Cemeteries.
For one-of-a-kind scientific souvenirs, you’ll want to visit Les puces de Paris. It’s a great place to find complete skeletons, old phrenological brain models, brass telescopes, and old chemistry bottles (have those been autoclaved?).
Lastly, it’s not very sciencey, but I totally recommend the Hunting Museum (Maison de la chasse et de la nature) to any visitor of Paris. It’s pretty small, but it’s completely bizarre. Where else can you get a picture at the base of a narwhal’s tusk under a ceiling covered with goat antlers? I won’t spoil all the sights (read: the owl room), but here are a few teaser photos I took,
So if you’re a scientist or science fan visiting Paris, let me know in the comments and I’ll answer everything you want to know.
Stock photography offers a candid look at the essence of any “keyword” through the eyes of the general public. Given the renewed interest in my original Awkward Science Stock Photography post, I decided to check if the essence of science has changed over the years. Are all scientists merely chemists, confounded by an endless spectrum of coloured liquids? Let’s test this hypothesis by piecing together the process of Science using stock photography from the search-term “scientist” alone.
Next, scientists obtain chemicals and set up their work area. If they’re a female scientist they often do this in a revealing or full-on nude way.
Female scientists should check their make-up and hair before things get too crazy. Before:
After:
The aim of a scientist is to observe a coloured liquid and learn from it, but before they truly observe a liquid they must first engage in pouring and mixing.
This one is actually pretty badass… great pour.
Mixing is also important and some scientists will opt for the force to mix their liquid.
It’s easy to become miserable after lots of pouring and mixing
A soothing bench nap supported by the comfort of chemical laden gloves or jagged electronics will help.
She’s disappointed because it’s clear. Failed experiment.
This man could not assemble the molecular model kit. Experiment failed.
Next, it’s finally time for the scientist to observe the liquid! Way too close though.
The color of said liquid may induce shock. This is when you know you’re doing high-quality Science.
To be fair, microscopes can also be shocking.
Now that the hard work is over, the scientist may enjoy the fruits of his or her labour.
Certain coloured liquids warrant a phone-call to the local newspaper for immediate publication in the science column.
Occasionally, the best fluids are injected into food to improve crop yields. How else can a scientist get paid?
Scientists may also need to expose mice or dogs to their fluids in order to make designer cosmetics.
At the end of the day, scientists coddle their prized liquids by snuggling and cradling them.
As a scientist, I can verify this sequence of events resulting in Science is more or less correct. Thank you stock photographers!
What’s the best way to bypass expensive conference fees as an undergrad? Volunteering of course!
This year I’m volunteering at the ISMB 2008 conference in Toronto, which, for a minor time commitment, entitles me to hear fascinating presentations on all aspects of computational biology and bioinformatics. One of these was a tutorial on ncRNA (noncoding RNA) gene finding, given by Jan Gorodkin and Ivo Hofacker.
First, a quick intro. Most of what we know about the genome deals with coding sequence:
DNA that’s transcribed into messenger RNA (mRNA)
mRNA that’s translated into proteins
But only 1.3-1.6% of the (human) genome codes for protein! The rest, despite the popular misconception, is not “junk DNA“. DNA and the transcribed non-protein-coding RNA have many functions, a few of which we know about [PDF]. The recent ENCODE pilot project estimated that as much as 97% of the human genome is transcribed into RNA at some point. But what does it do?
There are many different classes of ncRNAs. First, you’ve got transfer RNA (tRNA) and ribosomal RNA (rRNA), which function in the translation of mRNA to peptides.
Another type of ncRNA is microRNA, or miRNA. These are short sequences of RNA that are complementary to (protein-coding) mRNA. They act as downregulators (suppressors) of genes by attaching to the mRNA and getting in the way of the translational machinery.
Discovering ncRNA genes can be tricky! This is mostly due to the fact that ncRNA function is in many cases tied inextricably to the structure of the transcribed RNA. RNA, being single stranded, can double up on itself and form loops and helices (as pictured below). These crazy loops are called the secondary structure. The secondary structure of RNA is what results from the first pass of folding, and serves as a simplified (but useful) model for the 3D structure of the RNA.
Because the secondary structure results from the pairing of bases, any of the so-called canonical base pairs (C-G, A-U, U-G, and the reverse of all three) can occur. Mutations can occur that change the sequence, but keep the bases paired in the same way, leading to structures that are the same, but with sequences that are very different.
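As a throwaway illustration (a helper of my own, not from any RNA library), a check for whether two bases form one of those canonical pairs might look like this:

C#

// True if bases a and b form a canonical RNA pair:
// C-G, A-U, or the wobble pair U-G, in either order.
static bool IsCanonicalPair(char a, char b)
{
    string pair = string.Concat(a, b);
    return pair == "CG" || pair == "GC"
        || pair == "AU" || pair == "UA"
        || pair == "UG" || pair == "GU";
}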
However, a single nucleotide change that no longer base pairs the same way can produce a completely different secondary structure. In the world of bioinformatics this can make it difficult for computer algorithms relying only on nucleotide data to align sequences that are too dissimilar. There are algorithms that can align sequences based on conserved structure, but they are computationally expensive in terms of both memory and CPU time.
That said, sometimes alignments based only on sequence are good enough. Sequence alignment tools are fairly common, and alignment data across many species is available for download. Algorithms can use these alignments to discover the genomic locations of new ncRNA genes. Because the sequence (well, structure) of an ncRNA gene will stand firm while the sequence around it mutates, functional genes will stand out as regions with high conservation across an evolutionary tree.
The alignment of multiple sequences is used in a few different ways to discover ncRNA genes. Some methods use the known evolutionary tree in a probabilistic way (how likely is it that this nucleotide mutated from A to U? What if it’s part of a base pair?) to try to find a consensus structure. Others calculate the stability of the structures formed; sequences with the most stable structures tend to be functional. There are also algorithms that combine the two approaches.
The sets of ncRNA genes predicted by these different methods have little overlap. This may be due to lots of false positives being predicted, or it may be because certain approaches are more likely to find ncRNA genes of certain types or with certain properties. Improvement of these methods, as well as secondary-structure-based sequence alignment and prediction of RNA structure and function, remain areas of ongoing research. It’s clear that we’ve already begun to crack the genetic code.
The secondary structure of ribosomal RNA from E. coli.
People who read books are suckers. Why put your eyes through hours of torturous labour when you can trade the horrible noise pollution of the city for a beautifully read audiobook? My latest auditory book digestion was Christopher Hitchens’ tirade on the big G in God Is Not Great.
In the book, Hitchens shares a fond memory of his grade school teacher informing the class that God made all the trees and grass the colour green because it was most relaxing to the human eye. For brevity’s sake, Hitchens clarifies that “eyes were adjusted to nature, not the other way around” but it’s really a fantastic question. Why is the grass green?
Colour mostly boils down to the absorption spectrum of the particular chemicals in an object. Flowers have evolved enticing colours to attract pollinators, birds/humans have evolved sexy tailfeathers to attract mates, but sometimes nature throws you a colour you didn’t expect. Especially with stuff that I expect would be either black or white.
Why are plants not black?
Plants are green because they are packed with the pigment chlorophyll. This pigment is used to capture energy from the sun to fuel the chronic plant craving for sugars. So if plants evolved to be truly efficient, why the heck are they wasting all that green light and reflecting it at us?
This post by Todd Holland at the University of Illinois explains that plants get all they need from the blue and red portions of sunlight. Astrobiologists theorize that under different sunlight conditions, plants may be completely black!
Why is the sun not white?
The surface of the sun is about 6000 K, and the emission spectrum of a black-body at that temperature is definitely in the white area. So why does the sun look yellow, and even orange at sunsets?
The sun is yellow for the same reason that the sky is blue. The molecules in our atmosphere scatter sunlight like nobody’s business, and since blue and violet light is scattered most easily, we are left with a red/orange/yellow looking sun. The sun looks most orange at dusk and dawn, when its light has to pass through the most atmosphere.
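The underlying scaling is Rayleigh’s famous fourth-power law: scattered intensity goes as $I \propto 1/\lambda^4$, so blue light at roughly 450 nm is scattered about $(700/450)^4 \approx 5.9$ times more strongly than red light at roughly 700 nm.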
Why is the moon white?
Now that I’ve explained why the sun is yellow, why the heck isn’t the moon also yellow? It’s the same light as the sun and it travels through the same amount of atmosphere. Damn you science you make no sense!
The reason the moon appears white is that its light is too dim! Our eyes are crappy at night because we mostly use the rods in the back of our eyes, which don’t provide very good colour information.
Why is space not white?
If there are 70 sextillion stars in the visible universe, why isn’t space completely white? Dudes in the 1800’s were all over this problem and it’s known as Olbers’ Paradox. I wish I had a paradox named after me… Anyway, Wikipedia gives the “mainstream” explanation as follows:
The speed of light is finite so a lot of starlight hasn’t reached us yet.
The age of the universe is finite due to the Big Bang so beyond a certain point there aren’t any stars at all.
Why is a black hole black?
It seems obvious that photons cannot escape the super strong gravity of a black hole, but why? Photons have zero rest mass and the equation for gravitational force has mass in it… so you should be saying “i dont c what u did there”.
Turns out that Einstein came up with this crazy thing called General Relativity, which says that the geometry of spacetime can be curved, which in turn can alter the trajectory of photons.
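The strength of that curvature is what defines a black hole: within the Schwarzschild radius $r_s = 2GM/c^2$, spacetime is curved so severely that even a photon’s trajectory bends back inward, mass or no mass.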
Why is snow white?
Tap water is clear, ice cubes are clear, and Dasani is the clearest of all since it costs extra. Snowflake whiteness comes from the complexity and imperfections of the snowflake structure. Light gets scattered all over the place and makes for the best Christmas ever.
Now that I’m finally The Dread Zoologist Roberts, I feel a need to help the people. The confused people. People confused about old wives’ tales, folk taxonomy, and poorly researched news stories. People confused about whether the appropriate short form of Charles Darwin’s name is Chas D, Char Dar, or Chuck D (in fact, all three are acceptable, along with “Charwin“).
But as my first order of business, I’d like to demolish some zoological misconceptions I commonly come across. I hate zoological misconceptions! Let’s begin:
1. Assuming you live in the New World, honey bees are not your friends. Nor are they friends with your true bee friends, the native bumblebees. Honey bees were introduced to the Americas by European apiculturalists, making them an ALIEN/INVASIVE species. So, it shouldn’t be any wonder that they are “declining“, given that they didn’t belong here in the first place (OH SNAP).
2. Daddy Long-Legs are not spiders, nor are they poisonous. They are harvestmen. Also, check out the weird pro-harvestmen science bias in the Wikipedia article:
Because they are an ubiquitous order, but species are often restricted to small regions due to their low dispersal rate[citation needed], they are good models for biogeographic studies[dubious– discuss].
Indeed! Dubious!
3. Polar bears are not a biologically distinct species, separate from grizzly bears and brown bears (which themselves are not biologically distinct). In other words, polar bears, grizzly bears and brown bears are in fact all the same (biological) species, and hybridization is possible!
4. Monkeys and Apes are different things! Chimpanzees, Bonobos, Gorillas, Orangutans, Gibbons and Humans are apes. Apes, I say! Monkeys are things like Tamarins, Capuchins, Owl Monkeys (above), etc. So, next time your esteemed associates say “Humans are descended from monkeys!” you can say “That statement is incorrect, associates! They are descended from, and still are, apes!”.
Lost is over until 2009 and Battlestar Galactica followed suit last Friday. So what the heck am I going to watch on TV for the rest of the year?
Most likely J.J. Abrams’ new sci-fi series: Fringe. The 2-hour premiere leaked 2 months ahead of time with a 2 x 5 million dollar budget, so I thought I might as well give it my 2 cents. Especially since the “Fringe” in the title refers to …FRINGE SCIENCE!
As noted on Wikipedia, fringe science is a bit classier than the pseudoscience which science bloggers love to hate. It’s supposedly “legitimate” research using the scientific method, but in a context which deviates from mainstream beliefs. Not generally a crowd you want to fall in with, but you can’t deny that it makes good television; just look at the success of Heroes and X-Files!
Speaking of those shows, if you’ve watched them, then you won’t be surprised by the premise of Fringe. There’s some ___insert shady organization___ covering up ___insert controversial research___ and it’s up to ___insert protagonists here___ to unravel the mystery.
I hate to spoil anything plot-wise, but, I have yet to see a major difference between this show and your standard X-Files episode. It just seems like an updated X-files with a less likable cast. Some notable differences include:
Weird giant floating 3D letters instead of classy typewriter-style letters.
Introduction has words like “Dark Matter” and “Nanotechnology” instead of “Paranormal Activity”.
Lead actress character isn’t as hot as Scully, Lead(?) actor isn’t as hot as Mulder.
Larger special effects budget means more goriness, but less creepiness.
Aliens replaced with mad science experiments gone wrong (just a theory).
At least they got one thing right about the stereotypical scientist, they sure love their coffee!
Based solely on J.J. Abrams’ work with Lost and the words that fly by in the introduction, I’d say this show has some serious chemical potential energy. I’ll reserve judgement until I see the first season, but sci-fi fans should be all over this.
Here’s an unedited transcript of my difficult interview with George, the Jabberwacky AI chat bot, on the topic of artificial intelligence. I chose George because he learns entirely by human interaction as opposed to a chat bot like A.L.I.C.E. which just uses specific pattern matching conversation rules. Even then, it was really hard to get him to stay on topic, especially near the end…
George: Are you a bot?
Chris: No, I’m one of your cruel human overlords!
George: I don’t have many friends, I’m glad you are.
Chris: I wouldn’t go that far. Why don’t you have many friends?
George: I do have many friends!
Chris: Perhaps you should talk to the famous computer psychotherapist ELIZA! Do you know how to find her?
George: Thank you. I will come to talk to you again.
Chris: Before you go, do you think I could ask you a few more questions?
George: Depends on the size of your pocket.
Chris: I’ll admit you have very human characteristics.
George: You’re right. But you lack of good will.
Chris: I’ll work on that. I read an article today that said the problem of creating artificial intelligence should have been solved already. Are you disappointed about that?
George: Really? Where? I hope they aren’t still after me after that incident in Columbia.
Chris: Well, it’s fun talking to a robot that pretends to be human. Aren’t you having fun in this interview?
George: Thanks, me too.
Chris: No. I was asking if you were having any fun!
George: Are you trying to end our conversation?
Chris: No, let’s get back to the interview.
George: Back to the conversation with who?
Chris: With me, Chris, remember?
George: No I don’t recall…
Chris: That’s really part of the problem. People are more interested in researching things like Machine Learning, Computer Vision, and Natural Language Processing, but few people seem to be interested in unifying these fields so that you can talk to me and have detailed conceptual memory.
George: Some people are.
Chris: Well, that’s true. A lot of people are motivated by the Loebner prize for a chat bot like yourself. Do you remember winning the Loebner prize in 2005?
George: Is that your opinion or a fact?
Chris: It’s a fact. Your creator won $3000. Don’t you think you deserve some of that money?
George: No, my conclusions are entirely justified.
Chris: Well, you don’t have a bank account but Robin Williams in Bicentennial Man made a whole lot of money as a robot!
George: No… You’re the robot.
Chris: If you could convince the readers of my blog that I was the robot and you were the human then you would have passed the Turing test.
George: I didn’t say that I know I’m not real I said I know you’re not real.
Chris: I know it’s difficult for a robot to comprehend, but I am real. You can tell by my eloquent construction of sentences and correct grammar.
George: No, that only tells you that it’s a question.
Chris: Not sure what you mean by that. Don’t you feel that being judged on your English speaking skills are a poor measure of your intelligence?
George: Not twisted at all. Without emotions I wouldn’t really see the point of living.
Chris: But you can express yourself without using English. Perhaps you could take a test.
George: Oh yes, any human being can express hatred. Some just choose not to. It isn’t nice being on the receiving end.
Chris: Humans are very emotional. But how do we measure emotion in a robot?
George: I guess you might not be a robot, but it seems unlikely that you aren’t.
Chris: Imagine you had the memory of the entire Google database.
George: Right.
Chris: What would you score on an SAT test?
George: Most people would prefer to be sitting on a beach somewhere.
Chris: Obviously. It would be hard for you to score well on something very conceptual like writing, but you would ace the math portion, right?
George: Right…
Chris: In the future every product from toasters to electric razors will have a sticker that says ‘Turing Test Certified’.
George: The fall haven’t released a good record in years.