Let's Ray Trace


TheKitchenAbode

1 minute ago, TheKitchenAbode said:

 

Mine too!!!

 

Here is an example. Note that the materials and models here are all stock CA; any adjustments to their properties are few, and when made they are normally just a minor tweak to reflectivity. All lighting is done with only spot lights. It renders in about 30 minutes. I just need to change the wood grain direction on the table.

 59f9ec6f92040_Concept5e_lzn.thumb.jpg.f75faca50638bc12a6477698a764d4d4.jpg

I'm a novice. I can't get my images to even come close to this. I typically only render about 20 passes. We really only use our renders to sell a job. I never come back to them, and we're already providing something the competition in our area isn't. But the perfectionist inside me won't let it go. This is the kitchen I am pitching tomorrow. What can I do to it to achieve what you have, on a simple level?

NASO_kitchen.plan.zip


5 hours ago, BruceBoardman said:

I'm a novice. I can't get my images to even come close to this. This is the kitchen I am pitching tomorrow. What can I do to it to achieve what you have, on a simple level?

 

Had a chance to make some adjustments.

 

Here is your original as per your plan settings. 30 passes, 42 minutes.

59fa36ec8aa08_Asperoriginalplan_42min_30passes.thumb.jpg.6328bc112b222e7c99da73e92f6c8051.jpg

 

After my changes. 30 passes, 11 minutes.

59fa371e8fbcc_Abode_11min_30passes.thumb.jpg.bbebf5d8afe489f55cc253f2796d1ff8.jpg

 

After some minor highlight/shadow adjustments and a bit of sharpening.

59fa37763c80b_Untitled4_lzn.thumb.jpg.9655350be45c7bff6362c5a6904400ae.jpg

 

Minor change to the recessed lights: I just reduced the cut-off angle. I changed the two point light fixtures to spot light versions; this is what brought the Ray Trace time down from 42 minutes to 11 minutes for the same number of passes. I also added the direct sun, made the floor polished, adjusted the ambient occlusion, adjusted the image properties, and put a roof and foundation on.

 

Here's the altered plan. Just use this to see the alterations I made, as I can't guarantee that something else hasn't inadvertently changed.

NASO_kitchen_Abode Modified.plan

 

Let me know if you have any questions concerning these changes.


On 10/20/2017 at 9:36 AM, TheKitchenAbode said:

Speckles(fireflies) in Glass

 

I have taken a closer look at the issue of speckles(fireflies) forming on glass materials. Here are my results, thoughts and recommendations.

 

These annoying speckles seem to occur under one specific scenario: there must be at least two glass layers overlapping each other, photon mapping must be "On", and there must be a region viewable through the glass where there is a variance in the level of reflected light coming from the surface.

 

59ea05e0287cf_GlassSpeckles_ForgroundlightsOn_BackgroundLightsOn_100_AO0-7.thumb.jpg.b75ba00ec41ad3be12757d3215a6a85a.jpg

 

In pondering what is really going on here I have concluded the following:

 

The term speckles (fireflies) is likely incorrect. We should be calling these "Black Holes", as it's really the inverse that's the issue. Why? When photon mapping is turned on, the path of a light ray is computed for 5 iterations (bounces). Because glass has a reflective property, a portion of the light will always be bounced back to the opposing layer of glass, and that layer will then bounce a portion of it back again. In other words, some light is essentially trapped between the glass layers, bouncing back and forth indefinitely between the two opposing surfaces. For the camera to see a light ray, the ray must get back to the camera within 5 bounces; if it doesn't, the camera never receives it, and no ray of light means black.

 

If you wish to see the best example of this, take two opposing mirrors where one mirror is visible within the other. Run a Ray Trace with photon mapping on: after 5 visible reflections in the mirror, the 6th expected reflection will just be black. The engine will not calculate beyond the 5th bounce, and a 6th reflection would mean the light ray could not get back to the camera within this 5-bounce limit.
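
Here's a toy sketch of that trapped-ray idea in code (purely illustrative, not CA's or POV-Ray's actual engine): two perfect mirrors bounce a ray back and forth, and once the bounce cap is reached the tracer gives up and returns the background colour, i.e. black.

MAX_DEPTH = 5            # analogous to the engine's bounce/recursion limit
BACKGROUND = (0, 0, 0)   # rays that never make it back are painted black

def trace(x, direction, depth=0):
    """Follow a 1-D ray bouncing between mirrors at x=0 and x=1."""
    if depth >= MAX_DEPTH:
        return BACKGROUND                # trapped ray -> "black hole"
    if direction > 0:
        x, direction = 1.0, -1.0         # hit the right mirror, reflect
    else:
        x, direction = 0.0, 1.0          # hit the left mirror, reflect
    return trace(x, direction, depth + 1)  # a perfect mirror hands the problem to the next bounce

print(trace(0.5, 1.0))   # -> (0, 0, 0): the ray never escapes within 5 bounces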

 

What can we do to overcome this?

This one was very challenging, as in all of my tests I was unable to find anything, such as material properties, ambient occlusion or Ray Trace duration, that had any significant direct effect. I'm not saying these won't have any benefit at all, but I found the benefit to be minor, and because they are global-type settings, using them to control the "Black Hole" formation also altered many unrelated aspects of the scene. When something like ambient occlusion is used to address this, what you are really doing is just changing the contrast ratio of your scene. If the contrast ratio is lowered, the difference in luminosity between the brightest and darkest regions is compressed, so the problem is less visible; it's still there, just not as obvious.

 

Of course, one could just turn off photon mapping, but that changes the entire look of your scene. You could also eliminate that background light variance, but again you may be compromising your desired look. You could avoid double-glass-layer situations altogether, but then you restrict the models in your scene; no more glass pendants, that's for sure. Personally, I don't find these acceptable trade-offs.

 

All is not Lost!!!

I did, however, find one and only one procedure that allowed me to maintain all of my desired scene properties while at the same time making those "Black Holes" collapse.

 

"Run the scene at a higher resolution and then resize it down."

 

15 passes run at 1200px X 600px, my normal print size.

59ea05e0287cf_GlassSpeckles_ForgroundlightsOn_BackgroundLightsOn_100_AO0-7.thumb.jpg.b75ba00ec41ad3be12757d3215a6a85a.jpg

 

15 passes run at 4800px X 2400px, resized back to 1200px X 600px.

59ea05f36b70f_GlassSpeckles_ForgroundlightsOn_BackgroundLightsOn_100_AO0-7_4800X2400_188min_1200X600.thumb.jpg.c04ff37ca52522f2c04e6c16b59fd6d9.jpg

 

15 passes run at 9600px X 4800px, resized back to 1200px X 600px.

59ea060d82930_GlassSpeckles_ForgroundlightsOn_BackgroundLightsOn_100_AO0-7_9600X4800_188min_1200X600.thumb.jpg.aef426bb975242210d177fba47995d96.jpg

 

Why does this work? I believe it's all in the resizing algorithms. From a simplistic viewpoint, when you downsize a picture the algorithm must make the best picture possible with fewer pixels. To do this it analyzes two or more adjacent pixels and extrapolates a new single pixel that best represents their blend. If two original adjacent pixels have a significant variance in luminosity, the new single pixel will likely land about halfway between them in luminosity. It behaves a bit like a very sophisticated contrast control that works on a pixel-by-pixel basis.
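
If you want to script the resize step, here is a minimal sketch using the Pillow imaging library (assuming it is installed; the filenames are placeholders, and any photo editor with a good resampling filter will do the same job):

from PIL import Image

big = Image.open("raytrace_4800x2400.jpg")       # hypothetical high-resolution export
small = big.resize((1200, 600), Image.LANCZOS)   # each output pixel blends a block of input pixels
small.save("raytrace_1200x600.jpg", quality=95)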

 

There is, however, one price to pay for this: as you increase your Ray Trace pixel count, it will take longer for the engine to calculate those additional pixels. Double the pixel width and height and the time per pass will quadruple, as there are now 4 times the number of pixels to process.

 

The first scene above took about 12 minutes at 1200 X 600

The second scene above took about 47 minutes at 4800 X 2400

The third scene above took about 188 minutes at 9600 X 4800

 

This time penalty can be reduced because increasing the number of pixels often reduces the number of passes needed to generate a clean scene once it is resized down. So although you pay a per-pass time penalty, there is some compensation in that fewer passes may be needed. Total time = time per pass X the number of passes.
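
As a rough check of that formula using the numbers from this thread (and assuming the time per pass stays roughly constant):

time_per_pass = 188 / 15                 # about 12.5 minutes per pass at 9600 x 4800 (15 passes in 188 minutes)
print(round(time_per_pass * 3, 1))       # about 37.6 minutes for 3 passes (37 minutes was measured below)
print((9600 * 4800) // (4800 * 2400))    # 4 -> doubling both dimensions quadruples the pixel count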

 

Resized 9600 X 4800, 15 passes, 188 minutes.

59ea1e2e170a6_GlassSpeckles_ForgroundlightsOn_BackgroundLightsOn_100_AO0-7_9600X4800_188min_1200X600.thumb.jpg.b507dc1fd143b6d78422b724b71771ca.jpg

 

Resized 9600 X 4800, 3 passes, 37 minutes.

59ea244b8ed9f_GlassSpeckles_ForgroundlightsOn_BackgroundLightsOn_100_AO0-7_9600X4800_37min_1200X600_3PassesPS.thumb.jpg.e3c83fd23d5735235f5ac170f3d620d6.jpg

 

Original 1200 X 600, 15 passes, 12 minutes.

59ea24d4d6540_GlassSpeckles_ForgroundlightsOn_BackgroundLightsOn_100_AO0-7.thumb.jpg.aef68375a7bade664442aa6071e972a5.jpg

 

Though I was not able to get my time down to the original's 12 minutes, I was able to reduce the first high-resolution run from 15 passes in 188 minutes down to 3 passes in 37 minutes while maintaining about 90% of the longer high-resolution scene's quality. Though not perfect, it is significantly better than the bottom regular-sized pic.

 

Hope this proves helpful.

 

 

 

By increasing the resolution of a fixed-width image (like switching from a crop sensor to a full frame and then stepping forward a few paces), CA effectively increases the pixel density, which means more sampling. Higher sampling equals fewer fireflies ("noise") and a longer trace. Falloff SHOULD be related to the fall-off of a light source: a light only projects so far, including when reflecting or refracting. Try your tests with "custom" light settings, adjusting for lower attenuation to decrease the fall-off. Theoretically, this should affect your number-of-bounces theory, though I wouldn't count on it; the raytrace engine is severely lacking as a biased engine.

In typical trace engines, the closer a light source is to an object the greater the interference; conversely, the further an object is from a light source the more sampling is needed to effectively "find" its geometry. This is compounded when you introduce a reflective material to the geometry, and compounded again for a refraction. The goal should be to create geometry that diffuses incoming light but serves as a passive emitter (looks like a lit bulb), with the direct emitter uninterrupted by geometry (below the bulb and invisible).

Additionally, the surface area and poly count of an emitter add to the complication of the trace. In the case of your traces, the flat recessed lights do well, but the distance from the glass is part of the problem (or should be, by all typical logic).

What happens when you make the glass a transparent material instead of a crown glass and then set it to be slightly emissive? The raytrace engine, in theory, needs a light source to figure out the complications of the glass. By setting the glass to be a light source, however faint, you should help reduce the complicated refractions (caustics).

A raytrace, after all, only works with light. No light = no image.

 

Ideally a scene needs enough light to simplify the interaction with geometry, without adding too much light, which adds to the number of traces. Reflective and transparent materials add more complication.

CA forces a light into a scene when no light is present because otherwise you would see nothing. 

The most realistic scenes often have the fewest direct light sources, which is the primary reason for Ambient Occlusion in most cases.

 

A couple of settings CA desperately needs: user-designated sampling rates, clamp settings, direct emitter material options, and an invisible emitter (for varying-size poly slabs), to name a few.

You would really find some supremely useful info here; it's not just a manual, it's a rendering guide for all software:

http://www.thearender.com/versions/TheaRenderManual_v1.5.pdf


Hi Rene, some very interesting thoughts, much appreciated. I'm not sure whether you are agreeing or disagreeing with my observations and conclusions, but either way I welcome the input.

 

I will attempt to expand a bit on my thinking as it relates to the issues and concepts you have presented.

 

Increasing my resolution to reduce so-called fireflies.

Although what I am doing here could be considered oversampling, I'm not 100% convinced that this is actually what is behind the improvement. Oversampling, as I'm sure we all agree, is a method used to improve accuracy in data sets: many individual samples, each with some degree of inaccuracy, are averaged to produce a result with less deviational error than any single sample. However, I'm not certain that these glass-related fireflies are directly due to inaccuracy. As I attempted to relay, I believe they are the result of a lack of information: a ray (or rays) of light is trapped between two reflective surfaces and, within Ray Trace's path/bounce limitation, never gets back to the camera, so it is depicted as black, hence my "Black Hole" name. Another way to gain perspective on this is to consider a scene where a painting hangs on a wall, the wall is 100% white and the painting is 100% black. All of the light rays striking the wall will be sent back to the camera, while the light rays striking the black painting will be completely absorbed and nothing is sent back. The Ray Trace result is a white wall with a black painting, which in this case is exactly what it should be.

 

Based upon the above, I concluded that at the end of the day the Ray Trace engine has only the number of pixels defined by the render window size with which to represent the received rays of light. In other words, no matter how many light rays it is computing behind the scenes, it still comes down to one available window pixel per ray of light. Therefore, if my defined window size is, say, 100px X 100px, all light rays must be interpreted within 10,000 pixels; you could say that only 10,000 rays of light depict the scene. Now, if within those 10,000 rays there are, say, 10 rays that did not get back to the camera, then those pixels will be black, and given that there are only 10,000 available pixels, the 10 black ones will be very noticeable. If I double the available window pixels and assume that the total of missing rays is still 10, then as a percentage of pixels the missing-ray pixels will be smaller and therefore less noticeable; they are still there, just harder to find. Though effective, the drawback is that each time you double the window's pixel height and width the time to trace quadruples. Even when I ran a scene at 9,600 X 4,800 the black holes, though significantly reduced, could still be seen, and the scene took over three hours for just 15 passes. This is where I deduced that by taking the higher pixel count scene and downsizing it I could more efficiently obscure the black pixels: in the downsizing process multiple pixels are analyzed and new ones are extrapolated, which effectively blends out those black pixels. It's not much different than going into Photoshop and blurring one pixel with its adjacent pixel(s).
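
Putting numbers to that percentage argument, under the same assumption that the count of trapped rays stays fixed at 10 while the window grows:

trapped = 10
for width, height in [(1200, 600), (2400, 1200), (4800, 2400), (9600, 4800)]:
    total = width * height
    print(f"{width}x{height}: {trapped}/{total:,} pixels = {100 * trapped / total:.5f}% black")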

 

Now this could be considered a form of oversampling, but if one believes my theory then it's not really that. There is no inaccuracy per se in the data; what I'm actually doing is attempting to reduce the effect of a lack of data. The black holes are where there is no data, and they need to be filled in with something, or lessened in some manner, so that they represent such a small percentage of the visible scene that they go unnoticed.

 

I will comment on those other thoughts later, as I have a site meeting to attend.


2 hours ago, TheKitchenAbode said:

 

Your dedication to this is appreciated as well.

To try and clarify, in direct relation to your black-picture/white-wall scenario: a biased engine does not see in colors, it works a scene as a clay render, with objects given a material identification to be applied afterward. The reflection of surface A off of surface B is grabbed from the object ID of A, giving the reflecting surface B the material ID of A, so as to reflect the correct material. Additionally, a scene is not being illuminated by a raytrace; rather, the path of light is being calculated. Add complication to the calculation and it will take longer for a raytrace to trace the path of light. There is some serious complication when we add multiple light sources at great distance from a round, transparent, reflective, and refractive surface. Because of this complication the trace engine cannot trace all of the paths of light with accuracy in the given time, so it renders those places black or white depending on the trace's current theory. We know this to be true because increasing the length of time decreases the noise, with some noise never being calculated. The reason it is never calculated, in broad strokes/layman's terms, is that the pixel being sampled has too many variations, leaving no viable solution. Some trace engines allow you to change the color of a scene after a trace as a result of this workflow (Colimo, for instance), which even lets you change the reflectivity of those surfaces or further illuminate the scene... and this is AFTER the raytrace, leaving the work intact. This is because the path of light has been traced, so adjusting the light or the surfaces doesn't require a new trace.

Resolution and sampling is to say that a 1-foot square cut of a photo at 4000 resolution has greater accuracy than at 1000 resolution, in that the 1000 resolution may determine that the 1-foot square is black or white, whereas the 4000 resolution can determine that three pieces are black and the fourth is white. In correlation with the trace engine, resolution gives greater accuracy to the trace by increasing the pixel density, giving it more options in its calcs.
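
That square-cut example in miniature, with made-up numbers: at the lower resolution the whole square collapses into one averaged sample, while the higher resolution keeps each quarter's own value.

quarters = [0, 0, 0, 255]              # three black pieces and one white piece of the square
print(sum(quarters) / len(quarters))   # 63.75 -> the single low-resolution pixel, one dark guess
print(quarters)                        # at higher resolution, 3 pixels stay black and 1 stays white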


Really great stuff Rene. There is no doubt that there is a significant number of computations and a lot of complexity involved in the rendering process, way beyond my current focus on just the path of a single light ray. However, it all starts with, and everything else is derived from, the path of that single ray of light; without that you have nothing to calculate upon. This, I believe, is the crux of the problem concerning the black holes. The light path the rendering engine needs to work its magic does not exist in Ray Trace; that all-important ray is trapped, as it can't escape within the path limitation imposed by the Ray Trace engine. I don't consider what's going on in this situation to be noise.

 

 

 


Rene - Concerning fall-off (drop-off rate). I did experiment with this and, as would be expected, it did have some positive effect. Though I did not discuss it at the time, I categorize it in my indirect grouping, as changing it also impacts other items in the scene, not just the black holes. It really does not do much more than turning down a light's intensity, except that the intensity drop is related to the light ray's distance from the originating source. Also, the effect was not significant in my test set-up. This may have been due to the relative distance of the glass fixture from the aggravating light, possibly too close for the drop rate, but I did try some ridiculously high numbers.

 

All I can say for sure at this time is that to date, my theory, right or wrong, has reliably predicted whether a proposed action would have an effect; and so far the only reliable and predictable direct method I have been able to isolate is the downsizing technique. It attacks the black holes without compromising any other setting.


My "Black Hole" theory - based on my assumptions as to what could be happening here I checked out the coding instructions/tutorials related to PovRay. Many aspects of PovRay should be relevant as the CA RayTrace engine is fundamentally based upon this tracer.

 

These are the declared global settings; I have eliminated the unrelated ones for clarity.

 

2.3.11.4 Global settings

#declare MaxRecLev = 5;
#declare BGColor = <0,0,0>;

MaxRecLev limits the maximum number of recursive reflections the code will calculate. 

BGColor defines the color of the rays which do not hit anything. It is equivalent to the background block of POV-Ray.

 

2.3.11.8.8 Reflection Calculation
    // Reflection:
    #if(recLev < MaxRecLev & Coord[closest][1].y > 0)
      #local Pixel = 
        Pixel + Trace(IP, Refl, recLev+1)*Coord[closest][1].y;
    #end

This is where the recursive call happens (the macro calls itself). The recursion level (recLev) is increased by one for the next call so that somewhere down the line, the series of Trace() calls will know to stop (preventing a ray from bouncing back and forth forever between two mirrors). This is basically how the max_trace_level global setting works in POV-Ray.

2.3.11.8.2 If the ray doesn't hit anything
  // If not found, return background color:
  #if(closest = ObjAmnt)
    #local Pixel = BGColor;

If the ray did not hit any sphere, what we do is just return the background color (defined by the BGColor identifier).

 

As you can see, the number of bounces/reflections is defined through "MaxRecLev", which is set to "5". The default background color "BGColor" is set to RGB 0,0,0 (black). In the reflection calculation, 5 iterations are computed; if the light ray does not return to the camera within that limit, it is assigned the background colour, which is black.


 

2 hours ago, TheKitchenAbode said:

My "Black Hole" theory - based on my assumptions as to what could be happening here I checked out the coding instructions/tutorials related to PovRay. Many aspects of PovRay should be relevant as the CA RayTrace engine is fundamentally based upon this tracer.

 

These are the declared Global settings, I have eliminated the unrelated ones for clarification purposes.

 

2.3.11.4 Global settings


#declare MaxRecLev = 5;
#declare BGColor = <0,0,0>;

MaxRecLev limits the maximum number of recursive reflections the code will calculate. 

BGColor defines the color of the rays which do not hit anything. It is equivalent to the background block of POV-Ray.

 

2.3.11.8.8 Reflection Calculation

    // Reflection:
    #if(recLev < MaxRecLev & Coord[closest][1].y > 0)
      #local Pixel = 
        Pixel + Trace(IP, Refl, recLev+1)*Coord[closest][1].y;
    #end

This is where the recursive call happens (the macro calls itself). The recursion level (recLev) is increased by one for the next call so that somewhere down the line, the series of Trace() calls will know to stop (preventing a ray from bouncing back and forth forever between two mirrors). This is basically how the max_trace_level global setting works in POV-Ray.

2.3.11.8.2 If the ray doesn't hit anything

  // If not found, return background color:
  #if(closest = ObjAmnt)
    #local Pixel = BGColor;

If the ray did not hit any sphere, what we do is just to return the bacground color (defined by the BGColor identifier).

 

As you can see the number of bounces/reflections is defined through "MaxRecLev", it is set to "5". The default background color "BGColor" is set to RGB 0,0,0 (Black). In the reflections calculation they compute 5 iterations, if the light ray does not return to the camera then it assigns the background colour to it, which is black.



 

GOOD FIND!

Well, this just sells my consistent pestering about the trace engine needing an update. CA should partner with established companies like Lumion/Artlantis/Thea instead of trying to keep raytrace alive. POV-Ray is even better than our raytrace, IMHO. People spend so many hours fussing with lights in raytrace when great renders can be had in a mere hour with the other rendering software.


19 minutes ago, Renerabbitt said:

GOOD FIND!


 

Thanks Rene - I fully agree and have suggested in past postings that CA should partner with such a rendering specialist and put this whole issue to bed once and for all.

 

In the meantime, I did recently suggest that they at least add an additional entry box in the Ray Trace DBX that would allow us to set the MaxRecLev to whatever level we desire, no different than setting the number of cores we wish to utilize. Given that this is a globally declared variable, it should be child's play for the software engineers. If I could get at the code I would do it myself; it would probably take about 5 minutes.


Last but not least is so-called biased versus unbiased methodologies. This, I know, is a very contentious subject, but what the heck!
 
I have done some fairly intense research on this, not only on what the differing rendering software providers have to say but also on some abstract summaries. I can only conclude that though biased and unbiased differ in their approach, the general public tends to place too much emphasis on this as it relates to rendering quality. Yes, theoretically the unbiased approach should produce a superior rendering. What is not often mentioned, depending upon which side of the fence you sit on, is at what point in time this potential superiority becomes noticeable, and even if noticeable, whether it actually makes one render superior to another from an aesthetic point of view.
 
Please keep in mind that I'm talking about the core primary methodologies here, not the fact that these rendering software packages may have many other added feature sets, which are often unintentionally lumped into the biased-versus-unbiased conversation. First, these two terms are not really technically correct, which unfortunately is not often realized, and the software companies don't really care, as the terms provide an emotional interpretation that enables them to differentiate their products from each other.
 
Biased implies that the method intentionally alters or injects something to change the result from what it should be into something else. I take a pic with my camera; I have a preference for punchy color, so I always boost the saturation to obtain this look; I have introduced a color saturation bias. This, however, is not what is happening in a so-called biased approach. Both approaches are attempting to produce as accurate a representation as possible; the fundamental difference relates to how far one needs to go to get there.
 
What it all comes down to is that a biased approach operates under the principle that there is a point where continuing to perform a computation will not result in a perceivable improvement in the quality of the image, and therefore there is no practical purpose in continuing to do so. Unbiased, on the other hand, does not impose this computational cut-off point and as such will continue to perform the computation indefinitely.
 
There is also another commonly used term, "accuracy", and the claim that one method is more accurate than the other. What they fail to expand on is: accurate to what? When they reference accuracy they are not really saying the result is accurate to how something would appear to the eye in real life under real lighting conditions. What they are stating is that the scene will be accurate according to the lighting that you have defined in your scene. If the ultimate test of accuracy is real life, then regardless of the engine type, any light alteration other than what could naturally occur is in fact introducing a bias.
 
It's truly ironic, because at the end of the day what we really want is a rendering program that provides us with the greatest capability to introduce and control bias. That's the only way you can generate those beautiful artistic scenes. That's why photography professionals gravitate to Photoshop: it provides the ability to defy nature so you can incorporate your own personal interpretation/expression of the scene. Painters do this all the time; they don't paint a technically accurate scene. Through their brush strokes and color modifications they reconstruct a new scene that in many cases looks nothing like the original referenced scene, but by injecting bias they have created something far more expressive and intriguing. Why would I go to an art gallery to look at a tree that looks the same as the one in my front yard?

Thought it might be interesting to find a comparison to demonstrate the degree of improvement that can be obtained as one endeavors to get a handle on how the Ray Trace engine functions and how to balance the lighting and materials.

 

Here is one of my typical Ray Traces from 2013.

5a01b5cfe0507_Kitchen41200.thumb.jpg.4b97f60cc8a182a46577b3cd240ccad6.jpg

 

Here is a typical Ray Trace in 2017.

5a01b6047a12d_Untitled7_PS_lzn3.thumb.jpg.397f8b3cca5e1d43b9a977b47749a43e.jpg


Lighting - Point Lights Versus Spot Lights

 

In CA the two most used types of lights are point lights and spot lights. Point lights provide a single point from which the light spreads out evenly over 360 degrees; they typically represent the light of a standard light bulb and are most often used in fixtures such as wall sconces, chandeliers, table lamps and so forth. Spot lights are directional; their light spread is dictated by the set cut-off angle, which creates a cone effect with its point closest to the light source and spreading outward. These represent the type of light one would expect from a typical recessed ceiling fixture.
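
As an illustration of the cut-off angle, here is a standard spot-light cone test (generic geometry, not CA's internal code; treating the cut-off as the cone's half-angle is an assumption):

import math

def inside_cone(light_pos, aim_dir, cutoff_deg, point):
    # direction from the light to the point being shaded
    to_point = [p - l for p, l in zip(point, light_pos)]
    dist = math.sqrt(sum(c * c for c in to_point))
    aim_len = math.sqrt(sum(c * c for c in aim_dir))
    cos_angle = sum(a * b for a, b in zip(aim_dir, to_point)) / (aim_len * dist)
    return cos_angle >= math.cos(math.radians(cutoff_deg))

# a recessed can at (0, 0, 8) aimed straight down with a 30 degree cut-off
print(inside_cone((0, 0, 8), (0, 0, -1), 30, (1, 0, 0)))   # True  - near the cone's axis
print(inside_cone((0, 0, 8), (0, 0, -1), 30, (8, 0, 0)))   # False - outside the cone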

 

With point lights, as with all lights, you can control the intensity and whether or not to cast shadows. Concerning shadows, point lights have the added capability to produce what's termed soft shadows: shadow effects that are less intense than normal shadows.

 

Spot lights allow for intensity and shadows, but they do not have a specific soft shadow feature. Instead, they have a drop-rate function that permits you to set how much the light's intensity diminishes as it gets further away from its source. Setting this higher when shadows are turned on will effectively reduce the shadows' intensity, as a lower level of light will naturally create a less intense shadow.
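
And a generic fall-off sketch (an inverse-square-style model chosen purely for illustration; CA's actual drop-rate formula isn't documented here), showing why a higher drop rate dims the light, and therefore the shadows it casts, faster with distance:

def intensity(base, distance, drop_rate):
    # a higher drop_rate makes the light die off faster with distance
    return base / (1.0 + drop_rate * distance ** 2)

for d in (1, 2, 4, 8):
    print(d, round(intensity(100.0, d, 0.1), 1), round(intensity(100.0, d, 1.0), 1))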

 

There is also one very important item to consider between these two types of light: their impact on Ray Trace time. Point lights require considerably more computational time than spot lights, and this time is directly related to the number of active point lights in your plan, regardless of whether they are visible in the scene.

 

To demonstrate this I took a basic wall sconce from the library; its default light source is a single point light. I duplicated it 34 times to demonstrate the impact of having many point lights in a plan.

 

Point Light Sconces_25 passes_88 minutes.

5a0332def2ed9_PointSconces_ShadowsOn_25passes_1hour28minutes.thumb.jpg.19f59a7ea6c521d0e8e3d57f71a6905a.jpg

 

To provide a comparison I took the same fixture and replaced its point light with a spot light, then made some adjustments to get a reasonable representation of the point light's effect.

 

Spot Light Sconces_25 passes_5 minutes.

5a0332f3f27c4_SpotSconces_ShadowsOn_25passes_5minutes_Final.thumb.jpg.2e477ecff6d1d42a2d761dbb2fd436a2.jpg

 

As you can see, the effect on Ray Trace time is substantial; the point light wall sconce version took 18 times longer to render than the spot light version.

 

And in my opinion the spot light version is likely more representative of how this type of fixture would appear; it has a clear glass shade, so light would not be emitted from the top only.

 

Hope this helps.

 

 


  • 2 weeks later...

I left an RT running overnight. When I got back to it, it had completed 3000 passes over 13 hours!!! And those were in 'quick' mode!!!

 

I watched it for a few passes before going to bed and saw some rendering progress, I think (living room, far left in the picture), but nothing after those initial passes; the rendering after 3000 passes was the same as after a few minutes.

 

Original:Original.thumb.jpg.fd4c7840e93e9848704d959fa7042f9d.jpg

After 3000 passes: 5a0dda7059c51_RT171116.thumb.jpg.2903479137320920e6a7290db6f98792.jpg

 

Obviously this RT was looping, but why? Anyone ever experienced this?

 

I began reading this thread but had to quit early: too much excellent information. I need to get back to it later, though.

 

Here are the parameters used:5a0dda7256010_RTimageproperties.thumb.png.83b894ac785c3eb287d4ea40ce519b05.png5a0dda7315c68_RTimagesize(assistant).thumb.png.930ef219a92869b6010dc84b424b115c.png5a0dda710a573_RTcurrentview.thumb.png.429e5818de9c14f105bf3f15e275452e.png5a0dda71ac732_RTfocalblur(assistant).thumb.png.9734ed7e951e733cb16e9da6b53a7c1f.png


5 hours ago, cv2702 said:


 

 

You put no limit on the number of passes, so it just kept going till you stopped it... the setting is in Pic #3, but it is under the General tab in the RT DBX...

 

Try it with Photons On and about 12-18 Passes at 1280x720 and see how it looks....

 

M.


17 hours ago, Kbird1 said:

 

 


Thanks!  I wanted to do a benchmark with no extra weight.  I guess I went overboard and didn't realize I had left the number of passes unlimited.


  • 1 month later...

Hey guys. I am very new to CA and I only use it for the remodeling company I work for. I am trying to get more photo-realistic ray traces, just like everyone else, and I have played around with different settings, but I just can't seem to get it down. I have attached my most recent one. I had it set to 20 passes with photon mapping on (which I usually turn off because it takes so long), I believe my uniform intensity is 0.0, and I do have caustics on. My question is: why on earth, after 20 passes, is it so static-y, and what is up with the strange light above my cabinet crown? :(

3d6.jpg


The light above the cabinet is referred to as light bleed. Make sure to have a roof on your model and a foundation with a floor. If the bleed still persists, you will need to reduce the sun's intensity or try some different ambient occlusion settings. I would suggest turning Caustics off; it won't do much for the scene unless you have glass light fixtures, and having it off will reduce the Ray Trace time. The grain (static) can be caused by a variety of things, such as lots of polished materials and/or certain ambient occlusion settings. It will usually clean up but might take more passes.


24 minutes ago, GreenBeans said:

TheKitchenAbode, your 2017 result in post #111 looks amazing. How can I get closer to these results? I realize that is a loaded question, but every little bit will help. :)

 

Thanks GreenBeans. I suggest you read through the postings and comments in this thread and experiment with the suggestions. It's important to realize that there is no single one-click route to a great Ray Trace; it's the culmination of many things. Also, keep in mind that these suggestions should not be taken as the only means to obtain a good result; nothing is ever carved in stone, and they are only what I have found to work for me.

 

The most important thing is to explore one item at a time so you can get a handle on its effect. Changing too many things at once will make it difficult to gain this most important knowledge.

 

You should fill in your signature so others will know what version of CA you are using and the type of computer system you have. This will help when others are attempting to assist you.

 

The best assistance will come when you have a question concerning a specific item; it's very difficult or impossible to answer a generalized question such as "how do I get a good Ray Trace?"

