CR Load Image List
object of type 'NoneType' has no len()
That error, `object of type 'NoneType' has no len()`, is Python telling you:

> "You're trying to check the length of something that is actually `None`."

In the context of the **CR Load Image List** node (from the Comfyroll custom node pack for ComfyUI), it's likely caused by passing a `None` value where a list of images or image paths was expected.
### Common Causes
1. **No images found or loaded**
   The function expected a list of image paths but got `None`.
2. **Filepath misconfiguration**
   A directory path may be incorrect or empty, returning `None` instead of a list.
3. **Load function returns `None`**
   If the image list is generated by a previous node or script and that step failed silently, the result is `None`.
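For reference, all three causes reduce to the same two-line failure, which is exactly what the traceback is pointing at:

```python
# Minimal reproduction of the error: calling len() on None.
images = None        # e.g. what a loader returns when it finds nothing
print(len(images))   # TypeError: object of type 'NoneType' has no len()
```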
---
### How to Fix It
**Step 1:** Add a guard clause before the `len()` call:
```python
if images is None:
    raise ValueError("No images found or loaded.")
```
**Step 2:** Confirm the source:
```python
print(images)  # See what you're actually passing in
```
**Step 3:** Debug your pipeline:
- Are you passing an actual list of image paths?
- Is the image directory path correct?
- Is the return value of a loader step being passed correctly?
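Those three checks can be scripted in a few lines; this is only a sanity-check sketch, and the `path` value is a placeholder for whatever folder your node actually points at:

```python
import os

path = "ComfyUI/input/my_images"  # placeholder; use your real input folder
print("directory exists:", os.path.isdir(path))
if os.path.isdir(path):
    entries = os.listdir(path)
    image_files = [f for f in entries if f.lower().endswith((".png", ".jpg", ".jpeg"))]
    print(f"{len(entries)} entries, {len(image_files)} image files")
```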
---
### Example Fix
If using `os.listdir(path)` or a similar loader:
```python
import os

def load_images(path):
    if not os.path.exists(path):
        raise FileNotFoundError("Directory does not exist.")
    files = os.listdir(path)
    images = [f for f in files if f.lower().endswith((".png", ".jpg", ".jpeg"))]
    return images if images else None  # Could return [] instead of None to avoid the error
```
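And a quick usage sketch that ties it back to the guard from Step 1 (the directory name is a placeholder):

```python
images = load_images("ComfyUI/input/my_images")  # placeholder path
if images is None:  # guard from Step 1: never let None reach len()
    raise ValueError("No images found or loaded.")
print(f"Loaded {len(images)} image(s)")
```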
---
If you want to paste your code or pipeline block, I can pinpoint the exact problem and fix it.
what nodes sharpen the image
In **ComfyUI**, to **sharpen an image**, you're looking for nodes that enhance detail or increase edge contrast. Here's your toolkit:
---
### Native ComfyUI Nodes
1. **`Image Sharpness` node** (if installed via certain custom node packs)
   - Usually has a `strength` slider.
   - Internally uses a convolution kernel or an unsharp-mask-like effect.
2. **`KSampler` with CFG tweaks**
   - Crank up `CFG` or reduce denoise in img2img workflows to retain detail, which can fake "sharpening."
---
### Useful Add-on Node Packs
Install these via GitHub into your `custom_nodes` folder for more control:
#### `ComfyUI-Impact-Pack`
- **`Impact Sharpener`**: Direct sharpening effect with tunable strength.
- URL: [https://github.com/ltdrdata/ComfyUI-Impact-Pack](https://github.com/ltdrdata/ComfyUI-Impact-Pack)
#### `ComfyUI-Extra-Models` or `ComfyUI-Advanced-Image-Processing`
- Nodes like **`Unsharp Mask`**, **`High Pass Filter`**, etc.
---
### Hacky DIY (if no nodes available)
You can simulate sharpening via node stacking:
1. **High-pass filter** (extract edges).
2. **Blend** (original + high-pass) using an `Add` or `Overlay` node.
3. Adjust opacity to control strength.
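Outside ComfyUI, the same high-pass + blend trick looks roughly like this with Pillow and NumPy; this is a minimal sketch, not a ComfyUI node, and `radius`/`amount` are just starting values:

```python
import numpy as np
from PIL import Image, ImageFilter

def highpass_sharpen(img, radius=2.0, amount=0.6):
    original = np.asarray(img.convert("RGB"), dtype=np.float32)
    blurred = np.asarray(img.convert("RGB").filter(ImageFilter.GaussianBlur(radius)),
                         dtype=np.float32)
    highpass = original - blurred           # step 1: extract edges/detail
    blended = original + amount * highpass  # steps 2-3: add detail back, scaled by `amount`
    return Image.fromarray(np.clip(blended, 0, 255).astype(np.uint8))

# sharpened = highpass_sharpen(Image.open("photo.png"))
```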
---
### TL;DR
If you want plug-and-play sharpening:
- Use **Impact Sharpener** (from the Impact Pack).
- Use **Unsharp Mask** or **High Pass Filter** from advanced packs.
- Or blend a high-pass + original image manually.
Want a diagram of this node flow?
what better ways to bring out an image that is blurred a little
If your image is just *a little* blurred, sharpening alone might not cut it. You're trying to **restore lost detail** or at least **fake it convincingly**. Here's your arsenal, ranked from mild enhancement to full-on AI wizardry:
---
### 1. **Mild Fixes (Fast & Simple)**
#### **Unsharp Mask**
- Standard in image editing (also in some ComfyUI packs).
- Enhances edges by subtracting a blurred version from the original.
- Low risk of artifacts if used lightly.
#### **High Pass Filter + Overlay**
- Extracts detail, then blends it back over the original.
- Fine-tune with blend mode and opacity.
- Great for slight blurs.
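Pillow ships a ready-made `UnsharpMask` filter, so a quick local test of the first option can look like this (a minimal sketch; the filename is a placeholder and the `radius`/`percent`/`threshold` values are just reasonable starting points):

```python
from PIL import Image, ImageFilter

img = Image.open("slightly_blurred.png")  # placeholder filename
sharpened = img.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=3))
sharpened.save("slightly_blurred_sharpened.png")
```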
---
### 2. **AI/ML-Based Enhancers**
#### **Real-ESRGAN (Upscaler with sharpness magic)**
- It's not just for upscaling; it also enhances edges and textures.
- Use the `RealESRGAN_x4plus` (4x+) or `RealESRGAN_x4plus_anime_6B` (Anime6B) model for crispness.
- Upscale-model support is built into ComfyUI (Load Upscale Model + Upscale Image (using Model)) or added via ESRGAN custom node packs.
#### **GFPGAN (Face restoration)**
- If the blur is on faces, this is gold.
- Use it *after* upscaling or as a separate node.
#### **Stable Diffusion Inpainting / Img2Img**
- Take the image and reprocess it using a low `Denoise Strength` (around 0.1–0.3).
- Add a prompt that reinforces clarity (e.g., "high detail," "sharp focus," "fine texture").
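Outside ComfyUI, roughly the same low-denoise clarity pass can be sketched with the `diffusers` library, where `strength` plays the role of denoise; the model ID and values below are just example choices:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init = Image.open("slightly_blurred.png").convert("RGB").resize((512, 512))
result = pipe(
    prompt="high detail, sharp focus, fine texture",
    image=init,
    strength=0.2,        # low denoise: keep composition, recover crispness
    guidance_scale=7.0,
).images[0]
result.save("clarity_pass.png")
```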
---
### 3. **Hybrid Workflow**
Here's a killer flow:
1. **High Pass Filter**
2. **Blend with original (Overlay/Add)**
3. **Feed to Real-ESRGAN**
4. **Optional: Img2Img with SD for a clarity pass**
---
### Avoid These if Subtlety Matters:
- Cranking sharpening too high (halo artifacts).
- Using denoise- or compression-heavy upscalers.
- Repeated upscaling/downscaling (destroys fidelity).
---
Want me to build you a sample ComfyUI flow for this?