Let’s employ some APIs and web scraping in the area of legal research and data collection. The Decisions Crawler Tool for Nepal’s Courts streamlines the process of accessing and compiling legal decisions from the courts of Nepal: the Supreme Court, the High Courts, and the District Courts.
At its core, this tool provides a user-friendly web-based interface. The design is both intuitive and informative, making it accessible to a wide range of users, from seasoned experts to those with limited technical skills.
The tool operates by scraping decisions published by Nepal’s courts. Users can input specific decision dates or case-registration (darta) dates, allowing them to retrieve data with precision. A key feature is the option to select which courts to include in the search: users can compile decisions from the Supreme Court, High Courts, District Courts, or Special Courts, tailoring their research to their specific needs.
Automating the process
The Decisions Crawler Tool automates what was once a manual and arduous process. It searches through the web to locate relevant decisions and compiles them into a structured format, saving researchers valuable time. An essential aspect of the tool is its ability to retrieve links to Supreme Court decisions. This functionality enhances the depth of the research by providing easy access to the original judgments, enabling users to explore cases further.
Advantages of this Crawler Tool
By automating the data collection process, the tool drastically reduces the time required to compile court decisions. Researchers can focus on analysis and interpretation rather than data gathering. The tool’s web-based interface ensures accessibility from anywhere, making it a cross-platform resource. Users can tailor their searches to specific courts and dates, ensuring that the tool delivers precisely the information they need.
The technical nitty-gritty
HTML Front-End: The HTML part provides the user interface for interacting with the tool. It offers input fields for specifying decision dates, checkboxes for selecting the courts to include (Supreme Court, High Courts, District Courts, or Special Courts), and an option to treat the entered dates as darta (registration) dates rather than decision dates.
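As a sketch of how the front-end might read those inputs, the date field can be parsed into a clean list before any scraping starts. The function and field names below are illustrative assumptions, not the tool’s actual identifiers:

```javascript
// Hypothetical helper: turn the raw text of the date input field into a
// clean array of date strings. Field name and format are assumptions.
function parseDates(rawInput) {
  return rawInput
    .split(',')               // users enter multiple dates separated by commas
    .map((d) => d.trim())     // strip surrounding whitespace
    .filter((d) => d !== ''); // drop empty entries
}

// In the page itself this would be wired to the form, for example:
// const dates = parseDates(document.getElementById('decisionDates').value);
```

Keeping the parsing in a small pure function like this makes the input handling easy to test independently of the page.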
Async Functions: The code defines an async function, getSupremeCourtLink, that fetches links to Supreme Court decisions based on registration numbers.
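A minimal sketch of what such a function could look like follows. The base URL, query parameter, and CSS selector are assumptions for illustration, not the Supreme Court site’s real scheme:

```javascript
// Hypothetical: build the lookup URL for a registration number.
// The query parameter name is an illustrative assumption.
function buildSupremeCourtLink(baseUrl, regNumber) {
  return baseUrl + '?regno=' + encodeURIComponent(regNumber);
}

// Sketch of the async wrapper that fetches the page and extracts the
// decision link from the returned HTML (base URL and selector assumed).
async function getSupremeCourtLink(regNumber) {
  const url = buildSupremeCourtLink('https://example.org/search', regNumber);
  const res = await fetch(url);
  const html = await res.text();
  const doc = new DOMParser().parseFromString(html, 'text/html');
  const anchor = doc.querySelector('a.decision-link'); // assumed selector
  return anchor ? anchor.href : null;
}
```

Splitting the URL construction from the fetch keeps the testable logic separate from the network call.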
Event Listeners: An event listener is set up for the “Click here for results” button. When clicked, it collects user input, such as decision dates and court type selections.
Scraping Multiple Dates: It allows users to input multiple decision dates, and for each date, it scrapes decisions from the selected court types.
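One way to organize that is to expand the user’s dates and court selections into a flat list of scrape jobs, one per (date, court) pair. The job shape below is an assumption, not the tool’s actual data structure:

```javascript
// Hypothetical: build one scrape job per (date, court) combination.
// Whether the date is sent as a darta (registration) date or a faisala
// (decision) date depends on the user's toggle.
function buildJobs(dates, courts, treatAsDarta) {
  const jobs = [];
  for (const date of dates) {
    for (const court of courts) {
      jobs.push({
        court,
        dartaDate: treatAsDarta ? date : '',
        faisalaDate: treatAsDarta ? '' : date,
      });
    }
  }
  return jobs;
}
```

A flat job list also gives the progress bar an obvious total to count against.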
Progress Bar: A progress bar is used to display the progress of the data retrieval process.
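The progress bar only needs a completed/total ratio; a tiny helper like this (illustrative, not the tool’s actual code) keeps the update logic in one place:

```javascript
// Hypothetical: percentage of jobs finished, clamped to 0-100.
function progressPercent(done, total) {
  if (total <= 0) return 0;
  return Math.min(100, Math.round((done / total) * 100));
}

// e.g. after each job completes:
// bar.style.width = progressPercent(finished, jobs.length) + '%';
```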
Output Handling: The scraped data is displayed in the HTML output section, along with any error messages.
PHP Back-End (scrape.php): The PHP component acts as a proxy for making HTTP POST requests to the court websites. It accepts POST requests with court type, court ID, Darta Date, and Faisala Date parameters, and it forwards the request to the respective court website.
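From the browser’s side, a call to that proxy might look like the following fetch sketch. The parameter names mirror those listed above, but the exact values the court sites expect are assumptions:

```javascript
// Hypothetical: encode the proxy parameters as a form body. Parameter
// names follow the ones described above; values are illustrative.
function buildProxyBody(courtType, courtId, dartaDate, faisalaDate) {
  return new URLSearchParams({
    courtType,
    courtId,
    dartaDate,
    faisalaDate,
  }).toString();
}

// Sketch of the actual request to the PHP proxy:
async function fetchDecisions(job) {
  const res = await fetch('scrape.php', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: buildProxyBody(job.court, job.courtId, job.dartaDate, job.faisalaDate),
  });
  return res.text(); // HTML from the court website, parsed by the caller
}
```

Routing the requests through a same-origin PHP proxy is what lets the browser side avoid cross-origin restrictions when talking to the court websites.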
The courtmap.js file contains a mapping of court types and court IDs to their human-readable names. This mapping is used to identify the courts and display their names in the tool’s output.
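Structurally, such a mapping can be as simple as a nested object with a lookup helper. The type codes, IDs, and names below are made-up placeholders, not the file’s real contents:

```javascript
// Hypothetical shape of the courtmap.js data -- type codes, IDs, and
// names here are illustrative placeholders, not the real mapping.
const courtMap = {
  S: { 0: 'Supreme Court' },
  H: { 1: 'Example High Court' },
  D: { 75: 'Example District Court' },
};

// Resolve a (type, id) pair to a human-readable name for the output,
// falling back gracefully when a court is not in the map.
function courtName(type, id) {
  const byId = courtMap[type];
  return (byId && byId[id]) || `Unknown court (${type}/${id})`;
}
```

A fallback string for unmapped IDs keeps the output readable even when a court website returns an identifier the map does not yet know.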
If you’re interested in obtaining the source code, please don’t hesitate to reach out. Having access to the source code can be incredibly beneficial, whether you want to explore and learn from it, adapt it to your specific needs, or collaborate on further development.