I keep being asked about some of the technology used behind Vigay.com, especially as the site is often used as a test bed for developing new PERL scripts and coding techniques. I work as a professional PERL/Web developer, and I'm always striving to improve and optimise things.
I also don't like 'borrowing' other people's scripts, or using 'off the shelf' ones, as a) they never do quite what you want them to do, and b) they're never as optimised and efficient as I think they could be. My motto: "If you want something done properly, do it yourself!"
Originally I used Google for the search engine, but after reading much negative publicity about Google storing people's search queries and passing them on to the FBI where relevant, I decided to write my own search engine, which has evolved into the one you see in the top right corner.
It consists of a 12K PERL script which can scan multiple directories, automatically skipping non-textual files (such as file downloads and images). It has a lookup table of popularly misspelt words, so it can ask users whether they actually meant something else when a misspelling is entered. There is also a database of related terms, so, for example, if someone searches for 'tapes', the search engine might ask them if they meant 'recording'.
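The misspelling lookup boils down to a hash of common errors mapped to suggestions. Here's a minimal sketch of the idea; the word list and the `suggest` sub are my own illustration, not the actual script:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical table of popularly misspelt words => suggestions.
my %misspellings = (
    'recieve'   => 'receive',
    'seperate'  => 'separate',
    'tommorrow' => 'tomorrow',
);

# Return a suggestion for a misspelt term, or undef if it looks fine.
sub suggest {
    my ($term) = @_;
    return $misspellings{ lc $term };
}

my $query = 'recieve';
if ( my $fix = suggest($query) ) {
    print "Did you mean '$fix'?\n";
}
```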
If nothing matches a search request, the server automatically emails me to say that a search was unsuccessful. That way I can improve the website, perhaps by writing a specific article to cover a topic people search for frequently.
You will notice that all of my downloadable software has a download link which automatically takes you to a download page, notifying you of what it's doing and incrementing a download counter, so that I can see how many times each application has been downloaded.
Again the script is written in PERL and is very short (<3K); it performs both functions, handling the download as well as displaying how many times a particular file has been downloaded. Which function runs is determined by a single SSI on the relevant web page.
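A dual-purpose script of this kind might look something like the sketch below, with a mode argument standing in for the SSI parameter; the flat count file and its "filename count" layout are assumptions for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical flat count file: one "filename count" pair per line.
my $counts_file = 'downloads.txt';

sub read_counts {
    my %counts;
    if ( open my $fh, '<', $counts_file ) {
        while (<$fh>) {
            my ($name, $n) = split;
            $counts{$name} = $n if defined $n;
        }
        close $fh;
    }
    return %counts;
}

# Increment the counter for one file and return the new total.
sub bump {
    my ($name) = @_;
    my %counts = read_counts();
    $counts{$name}++;
    open my $fh, '>', $counts_file or die "can't write $counts_file: $!";
    print $fh "$_ $counts{$_}\n" for sort keys %counts;
    close $fh;
    return $counts{$name};
}

# The SSI on the page would select the mode: 'get' serves (and counts)
# a download, 'count' just displays the current total.
my ($mode, $file) = @ARGV;
if ( defined $mode and $mode eq 'get' ) {
    printf "Download number %d of %s\n", bump($file), $file;
}
elsif ( defined $mode and $mode eq 'count' ) {
    my %counts = read_counts();
    print $counts{$file} || 0, "\n";
}
```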
To save modifying or updating loads of HTML code when I add new collective nouns, I decided to keep a single flat-file database of nouns, then write a script to interrogate it and display selected genres, generating the relevant HTML on the fly as people view or search the database. Again, it's very small (<3K) and compact.
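A flat-file lookup of this kind could be sketched like so; the delimited field layout and the sample data are my assumptions, not the real database:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Assumed flat-file layout: genre|animal|collective noun (sample data invented).
my @db = (
    'birds|crows|murder',
    'birds|owls|parliament',
    'mammals|lions|pride',
);

# Build an HTML list for one genre on the fly.
sub html_for_genre {
    my ($genre) = @_;
    my $html = "<ul>\n";
    for my $row (@db) {
        my ($g, $animal, $noun) = split /\|/, $row;
        next unless $g eq $genre;
        $html .= "<li>A $noun of $animal</li>\n";
    }
    return $html . "</ul>\n";
}

print html_for_genre('birds');
```

Because the HTML is generated at view time, adding a noun means appending one line to the data file rather than editing any pages.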
On the main homepage there is a small script (originally two, but I combined them into a single SSI) which picks a random 'thought for the day' and also interrogates a database of events, birthdays, deaths etc. in order to pick four or five events which happened on this day in history.
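The selection logic is essentially a random array pick plus a date-keyed lookup. A rough sketch, with invented sample data standing in for the real databases:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(strftime);

# Invented sample data standing in for the real databases.
my @thoughts = (
    'Every day is a fresh start.',
    'Simplicity is the ultimate sophistication.',
);
my %events = (
    '12-25' => [ 'Isaac Newton born, 1642' ],
);

# Pick one thought at random, then look up today's date key.
my $thought = $thoughts[ int rand @thoughts ];
my $today   = strftime( '%m-%d', localtime );

print "Thought for the day: $thought\n";
print "On this day: $_\n" for @{ $events{$today} || [] };
```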
With the demise of the original webring, I decided to write my own script, which weighs in at just 18K of PERL code and manages the entire thing, from generating informational pages to handling membership requests, generating webring HTML segment code and also running the webring itself. More information is available here.
At the bottom of each page is a little 'stats' script which automatically adds the 'Email this page to a friend' link, as well as showing when the page was last updated and how many times it's been accessed. The script is small (4K); it reads the last-modified information from the file saved on the server, calculates how many days ago that was, and increments a counter each time the page is viewed.
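The 'days since last modified' part boils down to reading the file's mtime via `stat`. A small sketch, demonstrated against a hypothetical freshly written file:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# How many whole days since a file was last modified, via its mtime.
sub days_since_modified {
    my ($path) = @_;
    my $mtime = ( stat $path )[9] or die "can't stat $path: $!";
    return int( ( time - $mtime ) / 86400 );
}

# Demo against a freshly written file (hypothetical name).
my $demo = 'page.html';
open my $fh, '>', $demo or die $!;
print $fh "<html></html>\n";
close $fh;
printf "Last updated %d day(s) ago\n", days_since_modified($demo);
unlink $demo;
```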
Another script can scan the database of page counts and automatically generate a list of the top X most popular files. I've designed it for maximum flexibility, so the same script can generate a top ten downloads list or the top thirty most popular articles on the entire site. By changing a single variable I can select how many entries appear, so I could just as easily have a top 50.
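Sorting a count hash and slicing off the first X keys is all the 'top X' logic needs. A sketch with invented page counts; the `top_x` sub is my own naming:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Return the X most-viewed keys from a hash of page counts.
sub top_x {
    my ($counts, $x) = @_;
    my @sorted = sort { $counts->{$b} <=> $counts->{$a} } keys %$counts;
    $x = @sorted if $x > @sorted;    # don't overrun a short list
    return @sorted[ 0 .. $x - 1 ];
}

# Invented page counts for illustration.
my %page_counts = (
    'articles/foo.html' => 120,
    'articles/bar.html' => 310,
    'downloads/app.zip' => 95,
);

print "$_\n" for top_x( \%page_counts, 2 );
```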
I stole this idea from a number of other sites which offer an email service whereby, if you find an interesting article, you can recommend it to a friend by sending them an automated email with the link. As additional security, I've implemented a random anti-spam verification code which prevents automated spambots from misusing the service.
The script takes any email address and emails a short link pointing to the article, along with a brief line of description which is automatically extracted from the relevant article. The script is entirely automated, so it doesn't need any parameters and can automatically detect the article reference when someone clicks on the 'email' link.
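A random verification code of this sort can be generated in a couple of lines; the six-character length and the character set (skipping easily confused 0/1/O/I) are my assumptions, not details of the actual script:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Pick six characters at random from an unambiguous alphabet.
my @chars = ( 'A' .. 'H', 'J' .. 'N', 'P' .. 'Z', 2 .. 9 );
my $code  = join '', map { $chars[ int rand @chars ] } 1 .. 6;

print "Verification code: $code\n";
```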
I also wrote a script which calculates a person's biorhythm cycles. Again it's in PERL, and just 6K long.
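Biorhythm theory uses fixed sine cycles of 23 (physical), 28 (emotional) and 33 (intellectual) days, measured from the date of birth. A sketch of the standard formula, sin(2πt/period), not the actual script:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Math::Trig qw(pi);

# The three standard cycles: sin(2*pi*t/period), t = days since birth.
sub biorhythm {
    my ($days_alive) = @_;
    return (
        physical     => sin( 2 * pi * $days_alive / 23 ),
        emotional    => sin( 2 * pi * $days_alive / 28 ),
        intellectual => sin( 2 * pi * $days_alive / 33 ),
    );
}

my %b = biorhythm(10000);    # e.g. someone 10,000 days old
printf "%-12s %+.2f\n", $_, $b{$_} for qw(physical emotional intellectual);
```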
Last edit: 9th Feb 2018 at 1:54am
Viewed 5704 times since 9th Dec 2006.