Web scraping with bs4 fails with error 403
```python
import requests
from bs4 import BeautifulSoup

# Some sites reject requests without a browser-like User-Agent header and
# answer with HTTP 403. Sending a common User-Agent avoids this.
headers = {
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/88.0.4324.182 Safari/537.36"
}
url = "https://siebeneicher.com"

# response = requests.get(url)   # fails with 403
response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.text, "html.parser")
```
OLLAMA server and directory
Nextcloud client on Synology NAS
Your WebDAV URL is: example.com/nextcloud/remote.php/dav/files/USERNAME
(replace USERNAME with your username on your nextcloud instance)
Install 'Cloud Sync'.
- Add an Agent
- Select 'WebDAV'
- Add the URL for your Nextcloud WebDAV link and your credentials
- Select where to sync to (I created a share for that with a folder structure like "name\nextcloud" in it)
The rest of the options should be self-explanatory.
Colima Docker and Python
Autogen RuntimeError: Code execution is set to be run in docker (default behaviour) but docker is not running. The options available are:
- Make sure docker is running (advised approach for code execution)
- Set "use_docker": False in code_execution_config
This may help: sudo ln -sf $HOME/.colima/default/docker.sock /var/run/docker.sock
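If Docker should not be used at all, the second option from the error message can be sketched as a plain config dict (the work_dir value here is an assumption, not Autogen's default):

```python
# Hypothetical Autogen agent configuration sketch: option two from the
# error message above disables Docker-based code execution entirely.
code_execution_config = {
    "work_dir": "coding",   # assumed local working directory
    "use_docker": False,    # run generated code directly on the host
}

print(code_execution_config["use_docker"])  # → False
```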
Large Language Models Database
This is a fantastic website:
> www.hardware-corner.net/llm-database/

AI: Offline Quality Image-to-Text runs on Laptop
Image Management with Quantized LLMs: A Leap in Efficiency, Accessibility and Privacy
The challenge of managing extensive digital image libraries is a universal one: efficiently organizing and retrieving images from large collections is a task that transcends professional boundaries. The advent of quantized Large Language Models (LLMs), particularly the LLaMA model, has introduced a groundbreaking solution for "Image to Text" that goes far beyond keywording and is both efficient and accessible, even on standard computing hardware like a MacBook Pro.
Keeping the whole process 'in-house' has significant privacy and confidentiality benefits.
Another often-overlooked aspect of digital image management, particularly crucial for website design and content creation, is (web) accessibility for the visually impaired. Image captions, which provide a textual description of the visual content, are essential for making content more inclusive.
The Universal Challenge of Image Management
Recognising image contents (Image to Text) in a searchable and interpretable format is the next leap. The need for an automated, efficient, and privacy-conscious solution is widely felt. However, the resource requirements of large language models were often a limiting factor, as either considerable in-house investment was required or data had to be entrusted to external service providers.
The Power of Quantized LLMs in Image Processing
Quantized LLMs, such as the LLaMA model, represent a significant advancement for digital asset management. Model quantization is a technique used to reduce the size of large neural networks by modifying the precision of their weights. This process involves converting the weights of the model from higher-precision data types (like float32) to lower-precision ones (like INT4), effectively shrinking the model's size and making it feasible to run on less powerful hardware, even on a laptop like the MacBook Pro with 16GB of memory used for this demonstration.
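As an illustration of the idea (not the actual llama.cpp quantization scheme, which is more sophisticated), mapping float32 weights onto a 4-bit integer grid and back can be sketched in a few lines:

```python
import numpy as np

# Illustrative sketch of weight quantization: map float32 weights to
# 4-bit signed integers (16 levels, -8..7) and dequantize them again.
rng = np.random.default_rng(0)
weights = rng.standard_normal(8).astype(np.float32)

scale = np.abs(weights).max() / 7                       # fit into -8..7
q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
dequant = q.astype(np.float32) * scale                  # approximate reconstruction

# Rounding to the nearest grid point bounds the error by scale / 2.
print("max abs error:", float(np.abs(weights - dequant).max()))
```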
Key Benefits of Quantization for Image Management
- Reduced Hardware Demands: By lowering the precision of the model’s weights, quantization allows the LLaMA model to run efficiently on commonly available hardware, making this technology more accessible.
- Maintained Performance: Despite the reduction in size, quantized models like LLaMA maintain a high level of accuracy and capability, crucial for detailed image description and organization.
- Enhanced Privacy: Local processing of images with quantized LLMs ensures that sensitive data remains within the user’s system, addressing major privacy concerns.
- Time Efficiency: The script processes images in about 15 seconds each, a testament to the efficiency of quantized models in handling complex tasks quickly.
Practical Application and Efficiency
A script has been developed that leverages a Large Language Model to automatically generate and embed detailed descriptions into images from various sources, including files, directories, and URLs. This tool processes both RAW and standard image formats, converting them as needed and storing the AI-generated content descriptions in both text files and image metadata (XMP files) for enhanced content recognition and management. The practical application of this script on a MacBook Pro demonstrates the efficiency of quantized LLMs. The balance between performance and resource requirements means that advanced image processing and organization are now more accessible than ever. Batch processing of 1000 local image files took approximately 15 seconds per image.
Script utilising llama.cpp for image library management
```bash
#!/bin/bash
# Enhanced script to describe an image and handle various input/output methods
# file, path-to-files, url
# requires exiftool, llama.cpp

# User should set these paths before running the script
LLAVA_BIN="YOUR_PATH_TO_LLAVA_CLI"
MODELS_DIR="YOUR_PATH_TO_MODELS_DIR"
MODEL="YOUR_MODEL_NAME"
MMPROJ="YOUR_MMPROJ_NAME"
TOKENS=256
THREADS=8
MTEMP=0.1
MPROMPT="Describe the image in as much detail as possible."
MCONTEXT=2048
GPULAYERS=50

# Function to process an image file
process_image() {
    local image_file=$1
    local output_file="${image_file%.*}.txt"
    local xmp_file="${image_file%.*}.xmp"

    OUTPUT="$(${LLAVA_BIN} -m ${MODELS_DIR}/${MODEL} --mmproj ${MODELS_DIR}/${MMPROJ} \
        --threads ${THREADS} --temp ${MTEMP} --prompt "${MPROMPT}" \
        --image "${image_file}" --n-gpu-layers ${GPULAYERS} \
        --ctx-size ${MCONTEXT} --n-predict ${TOKENS})"
    RES=$(echo "$OUTPUT" | awk '/ per image patch\)/{p=1;next} p')

    # Remove leading and trailing whitespace
    RES="${RES#"${RES%%[![:space:]]*}"}"
    RES="${RES%"${RES##*[![:space:]]}"}"

    # Output handling
    if [[ $input_source == "file" ]]; then
        echo "$RES" > "$output_file"
        # Check if XMP file exists, if not create it
        if [[ ! -f "$xmp_file" ]]; then
            exiftool -xmp -o "$xmp_file" "$image_file"
        fi
        # Write the description to the XMP file
        if [[ -f "$xmp_file" ]]; then
            exiftool -XMP-dc:Description="$RES" "$xmp_file"
        else
            exiftool -XMP-dc:Description="$RES" "$image_file"
        fi
    elif [[ $input_source == "url" ]]; then
        echo "$RES" | pbcopy
    fi
}

# Export the function so it's available in subshells
export -f process_image

# Function to process a directory
process_directory() {
    local dir=$1
    while IFS= read -r -d '' file; do
        # Check if MODELS_DIR exists
        if [[ ! -d "$MODELS_DIR" ]]; then
            echo "Error: MODELS_DIR ($MODELS_DIR) not found. Exiting."
            exit 1
        fi
        process_image "$file" "file"
    done < <(find "$dir" -type f \( -iname "*.jpg" -o -iname "*.png" -o -iname "*.gif" -o -iname "*.tiff" \) -print0)
}

# Function to download and process an image from a URL
process_url() {
    local url=$1
    local temp_dir=$(mktemp -d)
    local temp_file="${temp_dir}/image"
    curl -s -o "$temp_file" "$url" && process_image "$temp_file" "url"
    rm -rf "$temp_dir"
}

# Main script logic
input=$1
if [[ -f $input ]]; then
    input_source="file"
    process_image "$input"
elif [[ -d $input ]]; then
    input_source="file"
    process_directory "$input"
elif [[ $input =~ ^https?:// ]]; then
    input_source="url"
    process_url "$input"
else
    echo "Invalid input. Please provide a valid file, directory, or URL."
    exit 1
fi
```
Examples
Example Image 1:
Model Generated Description: The image features a train station with a large train parked on the tracks. The train spans a significant portion of the scene, stretching from the left to the right side. There are several other trains visible in the background, indicating a busy train station.
In addition to the trains, there are multiple people scattered throughout the scene, likely waiting for their trains or attending to other tasks at the station. Some of these individuals are closer to the foreground, while others are further back in the scene.
The overall atmosphere of the image suggests a bustling and active train station, with various trains and people contributing to the busy environment.
Example Image 2:
Model Generated Description: The image features a large statue of a cat holding a fork and knife in its paws. The cat statue is positioned in the middle of the scene, with the fork and knife held prominently in its hands. The statue appears to be made of metal, giving it a unique and artistic appearance. In the background, there are trees visible, adding a natural element to the scene. The combination of the cat statue and the surrounding environment creates an interesting and visually appealing composition.
Remark: It is not error-free. There is no knife in the paw; it is a fold of the coat.
Conclusion
The integration of quantized LLMs like LLaMA in image management is a transformative development, making advanced digital asset organization accessible on standard hardware and ensuring data privacy. This approach represents a significant leap forward, offering a powerful, user-friendly solution for a wide range of users and scenarios.
*Note: For those interested in exploring this solution further or seeking assistance with similar challenges, consultancy services are available. These services provide expertise in integrating and customizing such technologies to suit a variety of needs and preferences. Feel free to contact me at hello∂c-7.de. Claus Siebeneicher*
Excel upload to OpenProject
Plesk Fwd: Unable to configure a web server on the host
> Subject: Unable to configure a web server on the host xxxx
>
> Unable to generate the web server configuration file on the host <> because of the following errors:
>
> Template_Exception: AH00526: Syntax error on line 24 of /etc/apache2/modsecurity.d/rules/tortix/modsec/tortix_waf.conf:
> ModSecurity: failed to load IPs from: /etc/asl/whitelist Could not open ipmatch file "/etc/asl/whitelist": No such file or directory
>
> file: /opt/psa/admin/plib/Template/Writer/Webserver/Abstract.php
> line: 75
> code: 0
>
> Please resolve the errors in web server configuration templates and generate the file again.
No connection with Safari or others
mobile
Chat-GPT: Beyond Copywriting, Storytelling and Excel. The Reincarnation of Scripting?
Those who know me are aware of my passion for photography. It brings me joy when others show interest in my pictures. And I want to present (and sell) them. Visualizing how the colorful images would look on walls is truly exciting.
To that end, I have created various platforms, from traditional photo gallery views to augmented reality and virtual exhibitions. However, what was missing were simple showrooms that demonstrate how my pictures would appear in decorated spaces.
I have been involved in personal computing since its early days, and I never lost touch with programming. While I may be a bit rusty in certain areas, I still possess knowledge of shell scripting, C, and Python, albeit not as proficiently as before. I might not be aware of the latest libraries for Python and have to look up OpenCV methods constantly. Nevertheless, it suffices for my personal needs, and I have the advantage of looking at problems holistically. I always have more ideas than I can implement or even try out. However, implementation is often time-consuming and sometimes tedious.
And this is where Chat-GPT comes into play.
I have been following the development of GPT-J for around two years, utilizing it for storytelling purposes.
But Chat-GPT is a dream come true: finally, I can focus on my ideas without being held up too much by implementation.
Chat-GPT as a Programming Assistant and Freelance Programmer
One of my previous jobs involved developing system architectures and, as part of this, translating operational requirements into technical system specifications; this comes in handy now.
An Experiment with Chat-GPT
Objective
Presenting my images from the “EuropasFarben” series as captivating reels on social media.
Task
The idea is to showcase selected images in various room templates and create a video clip that can be shared on social media.
Starting Point
I already have room photos and, of course, the EuropasFarben images. I manually determined the positions where the EuropasFarben images should be placed within each room template.
Now I need the following steps
- Selecting the images that best complement each room.
- Embedding these images into the room templates.
- Compiling a video clip from the results (guideline: display each image for 0.2 seconds, except for the last one, which should be shown for 5 seconds).
Implementation
To achieve this, I had Chat-GPT write three scripts: two Bash scripts and one Python program.
The first script identifies the three most suitable (in terms of color and pattern) rooms for each image. Then, the room template and the most fitting image are combined to create a new image. Finally, a specific number of images are compiled into a video clip.
All of this is automated, allowing hundreds of images to be processed within seconds. Changes, such as adding new rooms or adjusting the presentation frequency, can be easily made at any time.
With Chat-GPT as my programming assistant, I now have the ability to bring my ideas to life without significant effort and present my images in an impressive manner.
The CHAT-GPT Starting points
Image Overlay with ImageMagick
Write a script in zsh that takes two input images (referred to as base_image and overlay_image) and inserts overlay_image into base_image at position_x and position_y (in pixel) using imagemagick / convert. The dimensions (in pixel) of base_image remain unaltered, overlay_image shall be scaled to with_x or with_y (whichever fits best) without changing the aspect ratio. base_image, overlay_image, position_x, position_y, with_x and with_y shall be command line parameters. Ask if more information is needed.
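The overlay logic this prompt asks for can be sketched in Python with Pillow rather than the zsh/ImageMagick solution Chat-GPT produced; the function name and fit logic here are illustrative assumptions:

```python
from PIL import Image

def overlay(base_path, overlay_path, out_path, pos_x, pos_y, max_w, max_h):
    """Paste the overlay into the base at (pos_x, pos_y), scaled to fit
    max_w x max_h while keeping its aspect ratio; base dimensions stay
    unchanged, as the prompt requires."""
    base = Image.open(base_path).convert("RGB")
    over = Image.open(overlay_path).convert("RGB")
    over.thumbnail((max_w, max_h))      # in-place, aspect-preserving resize
    base.paste(over, (pos_x, pos_y))
    base.save(out_path)
    return base.size

# usage sketch (hypothetical file names):
# overlay("room.jpg", "picture.png", "composition.jpg", 400, 250, 300, 300)
```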
Find Color-Harmony Pictures
Write a script that reads the photo of a room and a directory with possible images that are candidates to be hung in this room. Scan through all the files of images in this directory and find the 10 best matching pictures for this room applying color harmony rules
Create Video from Images
(this is already a revised version of the requirements document, the first attempt is lost). Create a python script to create a video from images:
- Random Image Selection: The script randomly selects a specified number of images from a given folder. The images can be in either JPG or PNG format.
- Image Display Duration: Each image is displayed in the video clip for a specified duration, except for the last image which is displayed for a longer duration.
- Last Image Display: The last image in the clip is displayed for 5 seconds to provide emphasis.
- Filename Display: Two seconds after the display of the last image starts, the filename of the last image (without the extension) is displayed at the bottom of the video clip.
- Readability Enhancement: A tray background is added behind the filename text to improve its readability.
- CLI Configurability: The script allows the user to configure various parameters through command-line arguments. The supported arguments include the image folder path, output file path, duration for each image display, and the number of images in the clip.
- Exclusion of Last Image: The script ensures that the last image shown in a previous run is excluded from the current run to avoid repetition.
- Output Format: The final video clip is saved in MP4 format using the H.264 codec.
Half an afternoon for iterations and production
With these basic requirements we (me and CHAT-GPT) went through a few iterations, SCRUM-style, quickly reworked some issues, and by the end of the afternoon I had 356 video clips ready. The final requirements are summed up below.
The Chat-GPT end points
Documentation
For the final version of each script, CHAT-GPT is asked to: "Output the functionality of this script so that it can be used as a requirements document for re-creating it."
Certainly! Here’s the script functionality presented in a format that can serve as a requirements document for re-creating it:
Image Overlay Script – Requirements Document
Objective
The objective of this script is to overlay an image onto another image while applying a frame and shadow effect.
Features
- Accept two input images: a base image and an overlay image.
- Allow specifying the position coordinates and dimensions of the overlay image.
- If position coordinates and dimensions are not provided as command-line arguments, attempt to read them from a text file with the same filename as the base image but with the extension ‘txt’.
- Scale the overlay image while maintaining its aspect ratio to fit within the specified dimensions.
- Add a 20-pixel wide dark gray frame to the overlay image.
- Apply a shadow effect to the framed overlay image.
- Generate the output filename based on the overlay and base image filenames.
- Save the resulting composite image in a “compositions” directory.
- Clean up temporary files after the operation is complete.
Dependencies
- ImageMagick: The script relies on the ImageMagick command-line tool for image manipulation operations. Ensure ImageMagick is installed on the system before running the script.
Usage
The script accepts the following command-line parameters:
- Base image: The path to the base image file (JPEG format).
- Overlay image: The path to the overlay image file (PNG format).
- (Optional) Position X: The X-coordinate position of the overlay image on the base image (in pixels).
- (Optional) Position Y: The Y-coordinate position of the overlay image on the base image (in pixels).
- (Optional) Width X: The desired width of the overlay image (in pixels).
- (Optional) Width Y: The desired height of the overlay image (in pixels).
If any of the position and dimension parameters are not provided, the script attempts to read them from a text file with the same filename as the base image but with the extension ‘txt’.
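This sidecar fallback can be sketched as follows; the layout of the txt file (four whitespace-separated numbers) and the function name are assumptions, not taken from the actual script:

```python
from pathlib import Path

def overlay_params(base_image, cli_args=None):
    """Return (pos_x, pos_y, with_x, with_y), preferring command-line values
    and falling back to a sidecar file next to the base image
    (e.g. room1.jpg -> room1.txt)."""
    if cli_args and len(cli_args) == 4:
        return tuple(int(v) for v in cli_args)
    sidecar = Path(base_image).with_suffix(".txt")
    return tuple(int(v) for v in sidecar.read_text().split()[:4])
```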
The output file will be saved in the “compositions” directory with a filename based on the overlay and base image names.
To run the script, use the following command:
./overlay_script.sh base_image.jpg overlay_image.png [position_x position_y with_x with_y]
Limitations
- The script assumes the input images are in the JPEG and PNG formats. Adjustments may be necessary for other image formats.
- It is the user’s responsibility to ensure the input images have compatible dimensions and formats for proper overlaying.
- The script relies on ImageMagick, so it must be installed and accessible in the system’s PATH for the script to work correctly.
Future Enhancements
- Provide additional options for adjusting the frame and shadow parameters.
- Support more image formats and allow the user to specify the output format.
- Implement error handling and validation for better user feedback.
By following these requirements, one should be able to re-create the image overlay script with the desired functionality.
Room Image Matching Script – Requirements Document
Introduction
The Room Image Matching script is designed to find the best matching rooms for a given candidate image or directory of candidate images based on color harmony and contrast. It aims to assist in the selection of suitable room images that match the color scheme of a given candidate image.
Features
- Accepts a candidate image or directory of candidate images as input.
- Accepts a directory containing room images.
- Calculates color histograms for both the candidate image(s) and room images.
- Computes color harmony and contrast scores for each room based on the candidate image(s).
- Identifies the top matching rooms based on the scores.
- Prints the filenames of the best matching rooms and their corresponding scores.
- Copies the candidate image(s) to the respective room directories.
Usage
The script can be executed using the following command-line syntax:
python room_image_matching.py candidate_path room_directory [--num_rooms NUM_ROOMS]
Arguments
- candidate_path: Path to the candidate image or directory containing candidate images.
- room_directory: Directory containing the room images.
- --num_rooms NUM_ROOMS (optional): Number of top matching rooms to find (default: 10).
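The argument handling described above maps directly onto Python's argparse; a sketch (the actual script's parser may differ in details):

```python
import argparse

# CLI sketch matching the documented arguments of room_image_matching.py.
parser = argparse.ArgumentParser(
    description="Find the best matching rooms for candidate images.")
parser.add_argument("candidate_path",
                    help="Candidate image or directory of candidate images")
parser.add_argument("room_directory",
                    help="Directory containing the room images")
parser.add_argument("--num_rooms", type=int, default=10,
                    help="Number of top matching rooms to find (default: 10)")

# hypothetical invocation for illustration:
args = parser.parse_args(["pics/blue.jpg", "rooms", "--num_rooms", "3"])
print(args.num_rooms)  # → 3
```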
Algorithm
- Read the candidate image or scan through all images in the candidate directory.
- Calculate the color histograms for the candidate image(s) and room images.
- For each candidate image, compare the color histograms with the room images to calculate color harmony and contrast scores.
- Combine the scores to determine the overall suitability of each room for the candidate image.
- Select the top num_rooms rooms with the highest scores as the best matching rooms.
- Print the results, showing the filenames of the best matching rooms and their corresponding scores.
- Copy the candidate image(s) to the respective room directories in a subdirectory named ‘candidates’.
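The heart of the algorithm above, comparing color histograms, can be sketched with NumPy alone; the real script uses OpenCV, and the correlation-based score here is one common choice, not necessarily the one Chat-GPT generated:

```python
import numpy as np

def color_histogram(img, bins=8):
    """Per-channel histogram of an HxWx3 uint8 image, normalized to sum 1."""
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(np.float64)
    return h / h.sum()

def harmony_score(hist_a, hist_b):
    """Correlation between two histograms: 1.0 means identical distributions."""
    a = hist_a - hist_a.mean()
    b = hist_b - hist_b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# a synthetic "image" stands in for a real room photo here
rng = np.random.default_rng(1)
img = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)
print(round(harmony_score(color_histogram(img), color_histogram(img)), 6))  # → 1.0
```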
Dependencies
The script relies on the following dependencies:
- OpenCV
- NumPy
- scikit-image
- scikit-learn
Ensure that these dependencies are installed before running the script.
Example Usage
To find the best matching rooms for a single candidate image:
python room_image_matching.py path/to/candidate_image.jpg path/to/room_images
To find the best matching rooms for all images in a candidate directory:
python room_image_matching.py path/to/candidate_directory path/to/room_images
The script will display the top matching rooms for each candidate image and copy the candidate images to the corresponding room directories under the ‘candidates’ subdirectory.
That covers the functionality of the Room Image Matching script as a requirements document. You can use this document to re-create the script accurately or refer to it for understanding the script’s purpose and behavior.
make_clip_con.py
Requirements
Description
The make_clip_con.py script is designed to create a video from a specified number of random images selected from a given folder. The script provides flexibility in terms of image selection, display duration, and output configuration.
Functional Requirements
- Random Image Selection
- The script shall randomly select a specified number of images from a given folder.
- The images can be in either JPG or PNG format.
- Image Display Duration
- Each image shall be displayed in the video clip for a specified duration.
- The duration for each image display can be configured.
- Last Image Display
- The last image in the clip shall be displayed for a longer duration to provide emphasis.
- The duration for the last image display shall be configurable.
- Exclusion of Last Image
- The script shall keep track of the last image shown in a previous run.
- The last image shall be excluded from the current run to avoid repetition.
- Filename Display
- Two seconds after the display of the last image starts, the filename of the last image (without the extension) shall be displayed at the bottom of the video clip.
- The filename display shall be readable and distinguishable.
- Output Format
- The final video clip shall be saved in MP4 format using the H.264 codec.
- CLI Configurability
- The script shall allow the user to configure various parameters through command-line arguments.
- Supported arguments shall include the image folder path, output file path, duration for each image display, and the number of images in the clip.
- Configuration Persistence
- The script shall use a configuration file to store the state and settings of the application.
- The configuration file shall be created if it doesn’t exist.
- The configuration file shall store information such as the last images shown to avoid repetition.
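The "Exclusion of Last Image" and "Configuration Persistence" requirements can be sketched with the standard library alone; the state-file name and JSON layout are assumptions, not the script's actual format:

```python
import json
import random
from pathlib import Path

STATE_FILE = Path("make_clip_state.json")   # assumed config/state file name

def pick_images(folder, count):
    """Randomly pick `count` JPG/PNG images from `folder`, excluding the last
    image of the previous run, and remember the new last image for next time."""
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    candidates = [p for p in Path(folder).iterdir()
                  if p.suffix.lower() in (".jpg", ".png")
                  and p.name != state.get("last_image")]
    chosen = random.sample(candidates, count)
    STATE_FILE.write_text(json.dumps({"last_image": chosen[-1].name}))
    return chosen
```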
Non-Functional Requirements
- Reliability
- The script shall handle various scenarios, such as insufficient number of images in the folder, gracefully and provide appropriate feedback to the user.
- Usability
- The script shall provide clear and concise usage instructions for running the program.
- The command-line arguments and their functionality shall be documented.
- Portability
- The script shall be written in Python to ensure cross-platform compatibility.
- The script shall run on major operating systems, including Windows, macOS, and Linux.
You can use this requirements document as a guide for re-creating the make_clip_con.py script, ensuring that all the necessary functionality and features are implemented according to the specified requirements.