RenderEffects #3: Using RenderNode for faster, better blurring
The previous two articles covered (in more detail, and in many more words) content that I talked about very quickly in the video Android Graphics that I made with Sumir Kataria for the recent Android Developer Summit event. This article goes beyond that content, though I did touch on it at the end of a recent conference talk at Droidcon London. So if you'd like the video version of this (and the previous) content, here you go:
In the previous article, I showed how to create a frosted-glass effect which both blurred and lightened a subsection of an ImageView to make a picture caption more readable. Here was the result, with the enlarged, captioned picture appearing over the blurred background picture gallery:
But the blur used in the caption area, while valid and usable, was neither as good (because it's not as blurry) nor as fast as what we can get by using the platform's built-in RenderEffect blur (which is already used in the background blur of the picture gallery above, as explained in the first article in this series). It sure would be nice to get a faster, blurrier version, to help pop the text out in areas like we see above at the end of the word "Ceiling," where the black text is on top of other dark, hard-edged objects in the picture.
The reason I didn't use the "best" approach for blurring was that it's… not intuitively obvious how to do that, and I was optimizing for clear code and techniques. But now that that's all out of the way (in the previous article), I also wanted to show how you might achieve the same effect with the built-in, and better, blur effect.
The reason that the blur+frost effect isn't easy comes down to the fact that I only want to shade part of that ImageView in which the enlarged picture appears. That is, I only want to blur/frost the label area at the bottom, not the entire view. RenderEffect, however, applies to the entire view; there is no way to crop the effect to just a portion of the view. So when I apply a RenderEffect blur and then the frosted-glass shader to the ImageView holding the enlarged picture, with code like this:
val blur = RenderEffect.createBlurEffect(
    30f, 30f, Shader.TileMode.CLAMP)
val shader = RenderEffect.createRuntimeShaderEffect(
    FROSTED_GLASS_SHADER, "inputShader")
val chain = RenderEffect.createChainEffect(blur, shader)
setRenderEffect(chain)
I get this result:
This more pronounced blur is great for popping that caption text out, but it's… pretty terrible for everything else. The user probably wants to see the picture details, and the mega-blur is not helping. But blurring and frosting only a portion of the view turns out to be somewhat unobvious with our current APIs.
Okay, what about using a separate View?
You might reasonably wonder (as I did when first working on the app) why I can't simply rely on the view hierarchy to help out. That is, instead of shading the label area in the larger ImageView object, I could use a separate view sized to the caption bounds, sitting over the bottom of the ImageView, just like the existing TextView which holds the caption.
In fact, I could just use the TextView itself. Then I could blur/frost that view instead of the ImageView.
Well… yes. And no. I mean, I certainly could shade the TextView and get a similar effect. Ish. But this also applies the effect to the text; shaders run after the View renders all of its content into the view (including the text here), so I end up with something like this:
If you look closely at the label area, there are a couple of problems compared to the effect we achieved by shading a portion of the underlying ImageView. As a reminder, here's what it should look like:
The most obvious issue is the caption text. In the first image above, the letters are washed out. This comes from the effect being applied to the entire TextView, including the text characters. Shading the TextView directly results in blurry, frosted text, which isn't really what we wanted.
The other problem is a bit more subtle, but very noticeable if you look at the triangular mesh of windows on the right side of the caption area. In the correct version of the image, that area is clearly (if only slightly) blurred, whereas when we use the shader on the TextView, it's not blurred at all. This happens because the shader effect runs only on the pixels of the view it's applied to, not on whatever it happens to appear over on the display. So while it looks like it should shade those picture pixels from your viewing perspective, that's because they're rendered on top of one another on the display. But at the renderer level, the contents of each view are created independently, based on that view alone, without regard to what they will sit on top of when they are drawn.
So when we apply the shader to a transparent-background TextView, the only thing that's actually blurred is the text. The transparent pixels in that view simply… remain transparent. The entire reason for the effect was to make the text more readable, so this approach is clearly taking things in the wrong direction.
The other idea I mentioned above, inserting a new view between the underlying ImageView and the top-most TextView, would fail for a similar reason. While this technique would avoid the blurry-text artifact of shading the TextView, it would still not have the correct image data to apply the blur to (since that intermediate view wouldn't contain the image data), so there would be no visible blur. The shader would be applied to whatever color was in that placeholder view (presumably transparent pixels, as in the TextView example above).
There is one more thing we could do with an intermediate view: draw a cropped copy of the enlarged image into it, thus giving the shader something to actually blur and frost correctly. This would work, for the same reason that it works in the ImageView. And we wouldn't need the cropping logic of the original shader, because we'd be shading all of the pixels in this view. But it seems like a hack to go about manually cropping the photo and redrawing that duplicate copy to another view just to get this effect.
Maybe there's a better way…
What I'd really like to do (and what I tried, and failed, to do when I first wrote my demo app) is to chain effects together. That is, I'd like one RenderEffect using the system blur, via RenderEffect.createBlurEffect(), the same as I'm using in the underlying picture gallery container. Then I want a second effect that applies a frosted-glass shader (without also applying the box blur that's in my current shader), created with RenderEffect.createRuntimeShaderEffect(). Then I could composite these effects using RenderEffect.createChainEffect(), to tell the system to apply both effects together, one after the other.
This almost works… but there is no way to specify the crop area for the label, so it simply gives a blurred/frosted look to the entire enlarged image. Again, that's not the look I was going for.
So I can't use chained effects. But I can do something similar by using two RenderNode objects to apply the effects manually.
So far, we have been applying our shader to an entire View. This is powerful, but has the limitation explained above where the effect is applied to, well, the entire view. So if I want an effect (such as a blur) applied only selectively, or conditionally, that's not possible. Or rather, it's possible, but only when using shader logic like I have in the current blur+frost shader, which checks the location of the current pixel and runs or skips the effect accordingly. But this per-pixel logic approach is not possible for the other RenderEffects (blur, chain, etc.).
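To make that per-pixel cropping idea concrete, here is a minimal pure-Kotlin sketch of the branch the blur+frost shader performs for each pixel. This is an analogy, not the actual AGSL code; the function names and the captionHeight parameter are invented for illustration:

```kotlin
// Sketch of the shader's per-pixel crop test: only pixels that fall inside
// the caption band at the bottom of the view get the frost treatment.
fun inCaptionArea(y: Int, viewHeight: Int, captionHeight: Int): Boolean =
    y >= viewHeight - captionHeight

// Sketch of the "frost" lightening on one 8-bit color channel: blend the
// channel 40% of the way toward white. (The real shader also blurs.)
fun frostChannel(channel: Int): Int = channel + (255 - channel) * 2 / 5
```

With a 1920-pixel-tall view and a 100-pixel caption, only pixels with y >= 1820 are shaded; everything above passes through untouched, which is exactly the selectivity the built-in effects can't give us on their own.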
However, I can instead apply effects to RenderNode objects rather than Views, and use those RenderNodes to draw into a view selectively, thus achieving the goal of shading a subset of a View. RenderNode has the same setRenderEffect() API as View, so the setup for this approach is very similar.
But wait: what's a RenderNode?
To quote from the reference docs on RenderNode:

RenderNodes are used internally for all Views by default and are not typically used directly.

Oh no, wait, that's not what I meant to paste (we're most definitely going to use it directly). Here, this is better:

RenderNode is used to build hardware accelerated rendering hierarchies.
Every View object, at some point before its contents appear on the screen, records the operations and attributes needed to draw its contents, for delivery to the low-level renderer (Skia). It does this via RenderNode, which, as of API level 29, is exposed as public API that you can use directly. That is, you can cache commands in a RenderNode and then draw that node manually, typically to a View.
You'll usually do this by recording your drawing commands to a RecordingCanvas, which you can get via RenderNode.beginRecording(). You then draw to that Canvas the same as you would a typical Canvas object, only now your drawing commands are stored in the RenderNode. You can then render that node (with those commands you stored in it) into a View with a call to Canvas.drawRenderNode().
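If the record-then-replay model is new to you, this toy pure-Kotlin analogy may help. It is not the Android API (ToyRenderNode and its methods are invented); it just shows the shape of the pattern: commands are recorded once and can be replayed into a target any number of times:

```kotlin
// Toy analogy of RenderNode's record/replay model (not the real Android API).
// Drawing "commands" are captured into a list at record time, then replayed
// into a target later, the way a RenderNode's display list is drawn into a View.
class ToyRenderNode {
    private val commands = mutableListOf<(StringBuilder) -> Unit>()

    // Analogous to recording into the node's RecordingCanvas.
    fun record(command: (StringBuilder) -> Unit) {
        commands.add(command)
    }

    // Analogous to Canvas.drawRenderNode(): replay everything recorded so far.
    fun drawInto(target: StringBuilder) {
        commands.forEach { it(target) }
    }
}
```

The key property, as with a real RenderNode, is that recording happens once while drawing can happen repeatedly and into more than one target, which is exactly what the two-node technique below relies on.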
Now, back to our blur+shader example: the idea is to use two different RenderNode objects, one to render the underlying picture content and one to hold the RenderEffect which blurs that content. Each will be drawn into the View separately and can be positioned to draw the effects where we want them, which gives us the crop/position capability we wanted from RenderEffect.
Let's see how this works.
First, we create and cache two RenderNode objects, which will be reused (redrawn) every time the view itself is drawn:
val contentNode = RenderNode("image")
val blurNode = RenderNode("blur")
(Note: "image" and "blur" are not meaningful or referred to again; they are documented as being for debugging purposes, presumably internally, as there is no way to access these properties from the objects after they are passed in.)
Next, we need to override the onDraw() method in the ImageView where this content will appear. The point of overriding onDraw() is to inject the RenderNode code at the time when we are drawing the content of a view. Normally in onDraw(), we would first call the superclass's onDraw() method to draw the standard content in a View. But in this case, we want to create a RenderNode to hold that content, so we'll draw it there instead, and then use it as the source to draw from into the View:
override fun onDraw(canvas: Canvas?) {
    contentNode.setPosition(0, 0, width, height)
    val rnCanvas = contentNode.beginRecording()
    super.onDraw(rnCanvas)
    contentNode.endRecording()
    canvas?.drawRenderNode(contentNode)
    // ... rest of code below
}
There are a couple of things to note above: First, we're creating a RecordingCanvas and then asking the superclass (ImageView) to draw into that canvas. We then copy that content into the view's canvas with a call to drawRenderNode(). This avoids calling the superclass onDraw() method more than once, which is good practice in case there's extra overhead in that method which can be avoided by drawing with the cached version of the commands in the RenderNode object instead.
Second, note that I could have called setUseCompositingLayer() on the RenderNode if I were redrawing from that node often, for speed and efficiency. A compositing layer caches the drawing as a bitmap (a texture in the GPU), and future drawing operations from it would be simple bitmap (texture) copies, which are very fast. The tradeoff is the extra memory consumed by that texture. In this case, I'm only drawing the RenderNode twice: once to the view itself (in the code above) and a second time to the other, blurred RenderNode (in the code below). It's not worth caching the node just for that one extra drawing operation. But it's worth considering for your own RenderNode objects, depending on what you're doing with their content.
Finally, we perform the blur. We do this by drawing from the main RenderNode into the blurred node, with an appropriate translation. This blurred node has a blur RenderEffect set on it, and is positioned and sized to be just the label area that we want blurred.
    // ... rest of code above
    blurNode.setRenderEffect(RenderEffect.createBlurEffect(30f, 30f,
        Shader.TileMode.CLAMP))
    blurNode.setPosition(0, 0, width, 100)
    blurNode.translationY = height - 100f
    val blurCanvas = blurNode.beginRecording()
    blurCanvas.translate(0f, -(height - 100f))
    blurCanvas.drawRenderNode(contentNode)
    blurNode.endRecording()
    canvas?.drawRenderNode(blurNode)
}
This code sets the blur RenderEffect, with a blur radius of 30 in x and y (far more, and better, than the measly 5x5 box blur of my earlier shader approach). Note that setPosition() specifies a much smaller size than the contentNode earlier, because we only need this smaller area for the caption. Also note that the translationY operation moves the rendering to the bottom of the overall view, which is where the blurred caption lives.
Once again, we get a RecordingCanvas from this node. Before we draw into it (using contentNode), we back-translate from the caption location to the top of the image; this ensures that the content is positioned correctly for the larger image under the smaller/translated caption area. Finally, once the blurNode drawing is done, we render the result into the view's canvas (on top of the existing content from contentNode) with one more call to drawRenderNode() and we're done.
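Since the translation arithmetic is the easiest part to get wrong, it may help to see it isolated in plain Kotlin. The helper below is hypothetical (blurPlacement and captionHeight are my names, not part of the code above); it just computes the three numbers involved:

```kotlin
// Hypothetical helper: for a view of the given size and a caption band of the
// given height, compute the blur node's bounds (for setPosition), how far down
// the node is translated (translationY), and the matching back-translation
// applied inside its recording canvas so the image content lines up.
data class BlurPlacement(
    val width: Int,          // setPosition right edge
    val height: Int,         // setPosition bottom edge (the caption height)
    val translationY: Float, // shifts the node to the bottom of the view
    val canvasShiftY: Float  // back-translation used inside beginRecording()
)

fun blurPlacement(viewWidth: Int, viewHeight: Int, captionHeight: Int): BlurPlacement {
    val shift = (viewHeight - captionHeight).toFloat()
    return BlurPlacement(viewWidth, captionHeight, shift, -shift)
}
```

For the 100-pixel caption used above on a hypothetical 1080x1920 view, the node occupies (0, 0, 1080, 100), is translated down by 1820, and its canvas is shifted by -1820, matching the literals in the onDraw() code.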
Here's the final result. It's pretty close to what we had originally, but you can see that the blur in the label area is much more pronounced, which helps with the readability of the caption's text.
This is the end of this current series (though I reserve the option to write more shader and RenderEffect articles in the future. No promises). We managed to play around with the system blur to get a blur-behind effect that helped pop an image out from the background. Then we added AGSL shader logic to enhance the visuals in a caption for the image. Finally, we used RenderNode to take advantage of the system blur for a better (and faster!) effect, and to simplify the AGSL shader logic to provide just the frosted-glass effect.
There are many resources out there if you want to know more about this. Here are some to start with:
- RenderEffect: The class used to create effects for blurs, bitmaps, chains and more. These effects are set on Views or RenderNodes to change the way those objects are drawn.
- RenderNode: The object which holds the underlying operations used to draw Views, but which can also be used directly to record and store custom drawing operations.
- RuntimeShader: The object which holds the code for an AGSL shader.
- AGSL: Android Graphics Shading Language. Like SkSL, but for Android. It provides a mechanism for creating very custom per-pixel drawing effects.
- SkSL: The shading language for Skia. It's like GLSL, but for the Skia rendering pipeline.
- GLSL shaders: The language for fragment shaders when using OpenGL.
Play around! Create neat effects! Make better and more intuitive user interfaces! Have fun with graphics! That's what it's there for!
Thanks to Nader Jawad for his help in understanding and implementing the dual-RenderNode technique above.