Your guide to getting data entry done for your business
Data entry is an important task, but choosing the wrong solution can seriously harm your company's productivity.
Data Extraction is the process of extracting data from a variety of sources for further analysis. A Data Extractor is someone who helps businesses and organizations gain insight from their data and create descriptive and predictive models. They specialize in finding patterns and relationships that guide decisions and uncover meaningful information. Through carefully crafted queries and processes, our Data Extractors can transform raw data into a useful format that can be used for reporting, analytics, machine learning and more.
Here are some projects our expert Data Extractors made real:
When you partner with an experienced team of Freelancer's Data Extractors you can access valuable insights from your data that can guide decisions, uncover opportunities and create predictive models with new data sources. Our experts can help you unlock deeper insights with advanced filtering methods and complex coding. Explore the full range of possibilities with our talented community of professionals, capable of delivering comprehensive solutions tailored to your needs.
Ready to launch your very own project on Freelancer.com? We invite you to try us out and hire our experienced Data Extractors to make your design goals a reality. Let their creativity, skill, and proficiency bring something special to your project!
From 129,630 reviews, clients rate our Data Extractors 4.9 out of 5 stars.
I want a Python developer with strong experience in Python Selenium, especially undetected-chromedriver; I need to handle Cloudflare "verify you are not a robot" checks.
I have fewer than ten PDF documents, each laid out a little differently, and I need every piece of mixed text and numeric information transferred accurately into a single Google Sheet. Because the layouts are inconsistent, simple bulk-import tools won’t work; each file will have to be reviewed so the right cells line up with the right columns. Here is what I’m expecting from you: • Create or adapt a reliable method—manual, scripted, or a blend of both—to pull every required value from each PDF. • Populate my Google Sheet so all records sit in tidy, clearly labeled columns. • Double-check totals, dates, and text strings for accuracy before handing the sheet back. I will provide the PDFs and a basic column template the moment we start. The proje...
I need a lightweight, repeatable scraper that gathers every publicly visible customer review talking about Bayer from social-media sources—right now the focus is on Google. The crawler should pull the full review text, star rating (or reaction score, if available), reviewer name or handle, date, and the direct URL to each post. Please build it so I can run it on demand, ideally from a simple command line or Jupyter notebook. Python with requests / BeautifulSoup, Selenium, or Scrapy is fine; if you prefer another stack, let me know why it would be a better fit. Deliverables • Clean, well-commented source code • One sample export in CSV or JSON showing at least 100 live reviews • A short README explaining environment setup, run instructions, and how to alter s...
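A minimal sketch of the requests/BeautifulSoup route this brief mentions. The CSS selectors (`div.review`, `.review-text`, and so on) are placeholder assumptions, not the real page's markup, and live social-media review feeds usually need Selenium plus pagination handling on top of this:

```python
import csv
from bs4 import BeautifulSoup

def parse_reviews(html: str) -> list[dict]:
    """Extract review fields from one page of HTML (selectors are placeholders)."""
    soup = BeautifulSoup(html, "html.parser")
    rows = []
    for card in soup.select("div.review"):
        rows.append({
            "text": card.select_one(".review-text").get_text(strip=True),
            "rating": card.select_one(".review-rating").get_text(strip=True),
            "reviewer": card.select_one(".review-author").get_text(strip=True),
            "date": card.select_one(".review-date").get_text(strip=True),
        })
    return rows

def export_csv(rows: list[dict], path: str) -> None:
    """Write the parsed reviews to a UTF-8 CSV with a header row."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["text", "rating", "reviewer", "date"])
        writer.writeheader()
        writer.writerows(rows)

def scrape(url: str, out_path: str) -> None:
    """Fetch one page and export it; the network call is kept out of the parser."""
    import requests
    export_csv(parse_reviews(requests.get(url, timeout=30).text), out_path)
```

Keeping the parser separate from the fetch makes it easy to rerun on demand and to unit-test against saved HTML.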
I have a ten-page PDF that mixes narrative text with numbers, all presented as bullet-style lists. I need the entire document reproduced in Excel, exactly as it appears. Here’s what matters: • For every bullet list, keep the items together in one column and preserve line breaks so the order reads the same way it does on the page. • Make sure each figure stays paired with its text label—no shifting or misalignment. • Copy everything faithfully; don’t redesign tables or alter wording. Deliverable • A single, neatly formatted .xlsx file that mirrors all ten PDF pages. I’ll spot-check the sheet against the source for accuracy, so aim for 100 %. Please name the file to match the original and return it within 24 hours of acceptance.
I need an expert who can tap into our Lawson environment and pull the financial-side payroll information we keep there. The end goal is a clear, well-structured Excel workbook—no other format will do—containing every payroll field we agree on (earnings, deductions, taxes, cost center coding, and any other columns you know are standard for Lawson’s payroll tables). Here is what matters to me: • Source: Lawson (on-premise LSF, version 10) • Scope: Financial data tied specifically to payroll—nothing HR-only or attendance-related. • Delivery: A single Excel file, cleanly formatted and ready for pivoting or upload into Power BI. You may use whatever extraction method you are most comfortable with—Lawson SQL queries, the Lawson API, or an ETL...
I have roughly 5,000 DEF 14A proxy statements in HTML format and I need the key compensation details for each named executive pulled out and placed into a clean, structured file. The fields I must end up with are: base salary, stock options and awards, bonuses / incentive pay, plus any other compensation figures that appear in the summary or grants tables. Because the data are scattered in both narrative text blocks and embedded HTML tables, a purely scripted scrape misses too much, while a purely manual effort would be too slow. I’m therefore looking for a balanced workflow that blends solid Python-based parsing (BeautifulSoup, pandas, regex, maybe an LLM call for tricky passages) with targeted human review to catch formatting quirks and footnotes. Deliverables • A single C...
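For the scripted half of the blended workflow above, one plausible starting point is `pandas.read_html`, which lifts embedded HTML tables straight into DataFrames. The "Salary" keyword match below is only a heuristic assumption for locating summary compensation tables; narrative text blocks and footnotes still need the human-review pass the brief calls for:

```python
from io import StringIO
import pandas as pd

def summary_comp_tables(html: str) -> list[pd.DataFrame]:
    """Return every table in the filing whose text mentions 'Salary'.

    The keyword is a heuristic for spotting Summary Compensation Tables;
    real DEF 14A filings vary, so results need review before use.
    """
    return pd.read_html(StringIO(html), match="Salary")
```

In practice you would loop this over the ~5,000 HTML files, concatenate the hits, and route any file where the match count looks wrong to manual review.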
I want to start a freelance career as a technology content writer, specifically creating tutorials and guides that are easy for beginner-to-intermediate readers to understand. I am looking for an experienced mentor willing to coach me from zero until I can produce high-quality, publish-ready articles. What I need: • Step-by-step explanations of how to research technology topics, build an outline, and then write coherent, informative tutorials. • Practical guidance on on-page SEO basics so articles are easy to find in search engines. • Example article frameworks along with writing case studies (from your previous projects is fine). • Detailed feedback on my drafts, including corrections to language style, structure, and technical clarity. • Sar...
I have a set of PDFs with multiple tables packed onto each page. I need every one of those tables transcribed into a single Google Sheet, preserving the column order exactly as it appears. No formulas are required; straight data entry is the priority. Accuracy matters more than speed—totals must match, and no rows can be missed, even when the table breaks across pages. I will share: • The source PDFs • A blank Google Sheet with a tab layout that mirrors the table names in the PDFs What I expect back: • All data cleanly entered, one row per PDF row • Consistent use of dates, numbers, and text exactly as shown • A quick comment on any illegible figures so I can verify them If you are comfortable reading dense PDFs and keeping data perfectly align...
We need a researcher to locate and collect reports published on websites across Europe. Roughly 300 sites to search. Requirements: Fluent/native speakers prioritized for: Danish, Finnish, Norwegian, Swedish, Slovenian, Slovak, Italian, French, and Spanish. Accurate link sourcing and brief metadata (title, publication date, source URL). No special software required — just internet access and a browser. Please state which language(s) you’re fluent in and your estimated turnaround time.
I have a collection of PDFs containing tables that must be transcribed into a single Google Sheet. Each table holds both text and numerical values, and while many files have just one table, a portion include several that need to be captured separately yet placed in the same worksheet for easy consolidation. Accuracy is critical: every label, figure, and formatting nuance in the source tables should appear exactly the same in the sheet. Keep column order and headings consistent so downstream formulas run without rework. For files with multiple tables, please insert a blank row between each set so I can quickly distinguish them later. I will share the PDFs and an empty Google Sheet with the required header row. When finished, simply notify me—no extra macros or scripts are necessary;...
I need a Python-based scraper that pulls complete car-listing information from every day. At a minimum the script has to capture make, model, price, and mileage but, in practice, I want every publicly visible field on each listing so that nothing useful is missed. Here’s what matters to me: • Reliability – the code must navigate pagination, work around basic anti-bot measures (rotating user-agents / respectful delays), and throw clear errors if the site layout changes. • Clean output – save to CSV or an SQLite database with consistent column names, ready for later analysis. You’re free to choose libraries you trust (requests, BeautifulSoup, Selenium, Scrapy, Playwright, etc.); just document any setup steps and keep third-party dependencies to a mi...
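The reliability pieces this brief names (rotating user-agents, respectful delays, clean SQLite output) can be sketched roughly as below. The listing site was not named, so the fetch side and the field names are assumptions; only the plumbing is shown:

```python
import random
import sqlite3
import time

# A small illustrative pool; a production run would use a larger, current list.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
]

def polite_headers() -> dict:
    """Pick a user-agent at random for each request."""
    return {"User-Agent": random.choice(USER_AGENTS)}

def polite_pause() -> None:
    """Respectful 1-3 second delay between requests."""
    time.sleep(random.uniform(1.0, 3.0))

def save_listings(rows: list[dict], db_path: str) -> int:
    """Store listings in SQLite with consistent column names; return row count."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS listings "
        "(make TEXT, model TEXT, price REAL, mileage INTEGER)"
    )
    conn.executemany(
        "INSERT INTO listings VALUES (:make, :model, :price, :mileage)", rows
    )
    conn.commit()
    n = conn.execute("SELECT COUNT(*) FROM listings").fetchone()[0]
    conn.close()
    return n
```

The pagination loop and the "throw clear errors if the layout changes" check would wrap around these helpers once the target site is known.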
We are seeking a reliable individual based in South Korea to assist with a one-time task involving access to official company registry information through a local government portal. Task Description The selected person (PTR) will support us in accessing and retrieving registry information for a specific company via the official South Korea registry office portal: Key Requirements: * Must be a local resident located in South Korea * Must have access to the portal and be familiar with its data-retrieval process. The document expense will be fully reimbursed, and you will also receive an additional USD 5 as an effort fee.
I need automation in Excel for my travel portal's .xls export: extract the client name, booking ID, flight number, PNR, and amount, and flag any row that involves a cancellation or amendment.
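A hedged sketch of that extraction with pandas. The column headings and the idea that a "Status" column flags cancellations/amendments are assumptions about the portal's export; adjust to the real file:

```python
import pandas as pd

# Hypothetical column names -- match these to the real portal export.
WANTED = ["Client Name", "Booking ID", "Flight Number", "PNR", "Amount"]

def extract_bookings(df: pd.DataFrame) -> pd.DataFrame:
    """Keep only cancelled/amended rows and the five required columns."""
    flagged = df[df["Status"].str.contains("cancel|amend", case=False, na=False)]
    return flagged[WANTED]

def run(xls_path: str, out_path: str) -> None:
    """Read the portal file and write the filtered result (legacy .xls needs xlrd)."""
    extract_bookings(pd.read_excel(xls_path)).to_excel(out_path, index=False)
```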
I have a backlog of more than fifty PDF files that hold mixed text and numerical information, laid out in roughly six to ten columns per page. All of that content needs to end up in a single, well-structured Google Sheet, organised exactly as it appears in the source documents. Here is what I need from you: • Every row and column transferred faithfully, keeping headings, units and number formats intact. • One consolidated Google Sheet, neatly formatted and share-ready. • A quick cross-check on totals or obvious outliers so the final sheet is error-free. Speed is welcome, but accuracy is essential; I would like to spot-check at least 99 % correctness before signing off. If you already work with tools such as Google Workspace, Adobe Acrobat, or OCR utilities to streamline da...
I need a quick data-grab from my FootyStats account. Once I share the login details and the exact list of leagues, simply navigate to each competition and download every available Team statistics spreadsheet. It should come to roughly 250 individual CSV/XLSX files. Only team stats are required—skip the match or player datasets. After all files are downloaded, place them in one organised folder, compress it into a single ZIP archive, and send me the download link or attach it here. Confidentiality is important, so please handle the credentials securely and delete them once the job is done. As soon as I receive and verify the ZIP (correct leagues, no corrupt files), the task is complete.
I need to set up a reliable routine that pulls our sales records from CRM and outputs them as standards-compliant files that match a simple schema I will supply. The assignment is strictly about getting data out of CRM: no importing or transformation inside the platform, so the focus is on clean extraction, correct field mapping, and well-structured XML. If you have questions about the schema or the target environment, let me know upfront so we can keep the turnaround tight.
I need a Python-based solution that automatically gathers company and shareholder data, pulls supplementary details via external APIs, and outputs a clean, unified dataset I can query at any time. Scope of the scrape • Sources: company websites, financial databases and relevant public records. • Website focus: company profiles, turnover figures and any available Demat / share-holding particulars. What the tool should do 1. Crawl or call the above sources, respecting rate limits. 2. Parse the required fields, normalise names and IDs, then enrich each record through one or more APIs (for example OpenCorporates, Clearbit or any better suggestion you have). 3. Store results in a structured format (CSV plus an SQLite or Postgres option). 4. Offer a simple comma...
I have between 11 and 50 PDF files that share an identical layout, and I need every field—both text and numerical values—transcribed accurately into a single Google Sheet. Because the template never changes from file to file, once the first row is mapped the rest should flow quickly; what matters most to me is flawless accuracy, consistent formatting, and preservation of any leading zeros or special characters that appear in the PDFs. You’ll receive the PDFs together with a sample Sheet that shows exactly where each column should go. When you’re finished, I should be able to cross-check totals and spot-check random rows without finding discrepancies. Deliverable • One Google Sheet containing all records from every PDF, formatted to match the sample and read...
I need a reliable script or Windows application that automatically gathers text content from specified websites and online databases, then saves everything into a clean, well-structured CSV file. A Windows application would be preferred. The crawler should be able to crawl a website and spider a list of URLs for approval, go through the website automatically, or simply scrape a given list of URLs (from a .txt file). Key details • Sources: public-facing websites and shops (including logins using username:password) • Data type: text only—no images or binary files. • Output: one CSV per run, UTF-8 encoded, with a header row • Should be able to read/extract data from various shops and websites; generally I need a basic software + "plugins" fo...
I have to confirm whether specific street addresses qualify for the government-funded home-insulation programme. The Energy department website holds an “address eligibility” checker, and I need that information pulled automatically rather than re-typing each location by hand. Your task is to build and run a scraper that goes through the same steps the public tool requires, captures the eligibility result for every address I supply, and returns the full set in a clean Spreadsheet (Excel or CSV is fine). A repeatable script—Python with requests / BeautifulSoup or Selenium, or any language you are comfortable with—is preferred so I can rerun it later when the list of addresses grows. Handle captchas or session cookies if the site uses them, and respect polite crawlin...
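A minimal sketch of the repeatable checker described above. The endpoint URL, the form field name, and the `#eligibility-result` element are all assumptions; the real flow has to be reverse-engineered from the Energy department's public tool first, and any captcha step would need separate handling:

```python
import csv
from bs4 import BeautifulSoup

def parse_result(html: str) -> str:
    """Pull the eligibility verdict out of a response page (placeholder selector)."""
    soup = BeautifulSoup(html, "html.parser")
    node = soup.select_one("#eligibility-result")
    return node.get_text(strip=True) if node else "unknown"

def check_addresses(addresses: list[str], out_path: str) -> None:
    """Run every address through the checker and write a two-column CSV."""
    import requests
    session = requests.Session()  # a Session keeps any cookies the site sets
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["address", "result"])
        for addr in addresses:
            resp = session.post(
                "https://example.gov/checker",  # hypothetical endpoint
                data={"address": addr}, timeout=30,
            )
            writer.writerow([addr, parse_result(resp.text)])
```

Because the address list will grow, keeping the parse step separate means only `check_addresses` needs a rerun later.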
We are looking for an experienced developer who can build an automated system to extract daily newly incorporated company data from the MCA (Ministry of Corporate Affairs) website – https://www.mca.gov.in. The system should automatically collect and deliver the list of companies incorporated each day in structured format (Excel / CSV / API / Database). Scope of Work: Develop a web scraping or API-based solution to extract daily incorporated company data from the MCA portal. The tool should automatically fetch newly incorporated companies every day. Data should include the following fields (minimum): CIN Company Name Date of Incorporation ROC (Registrar of Companies) State Company Type (Private Limited / LLP / OPC / Public Limited) Authorized Capital (if available) Regist...
I have a batch of scanned documents whose text is crisp enough for reliable OCR extraction. I need that content transferred into a Google Sheet, keeping the same column-and-row layout that appears in the originals. Please use any OCR tool you trust (Adobe, Tesseract, Google Vision, etc.) to capture the text, then spot-check for accuracy before pasting it into the sheet. Deliverables • One Google Sheet mirroring the documents’ table structure • 100 % of the text transcribed, double-checked against the scans for typos or alignment errors Once the sheet matches the documents exactly, the job is done.
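If the Tesseract route is taken, one common way to preserve the column-and-row layout is to treat runs of two or more spaces in the OCR output as column boundaries. That heuristic is an assumption about how cleanly the scans OCR; the spot-check step stays essential:

```python
import csv
import re

def lines_to_rows(text: str) -> list[list[str]]:
    """Turn OCR output into rows, treating 2+ spaces as a column boundary."""
    rows = []
    for line in text.splitlines():
        if line.strip():
            rows.append(re.split(r"\s{2,}", line.strip()))
    return rows

def ocr_to_csv(image_paths: list[str], out_path: str) -> None:
    """OCR each scan and append its rows to one CSV (requires Tesseract installed)."""
    import pytesseract
    from PIL import Image
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        for path in image_paths:
            text = pytesseract.image_to_string(Image.open(path))
            writer.writerows(lines_to_rows(text))
```

The resulting CSV imports directly into Google Sheets, where the manual accuracy pass can happen.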
We are looking for an experienced developer to build a robust web scraping solution capable of extracting structured data from a login-protected medical/drug repository website. The platform contains a large database of drug information (potentially hundreds of thousands to over a million pages). The scraper should be able to navigate through the website after login, systematically extract relevant drug data, and store it in a structured format. Scope of Work: Develop a scraper that can log into a protected website. Navigate through the drug repository pages. Extract structured information from each drug page. Handle pagination and large-scale crawling. Implement mechanisms to prevent crashes or interruptions during long scraping runs. Store extracted data in a structured format such as ...
I have one PDF that contains a series of clearly formatted tables. I need every row and column from those tables transferred accurately into a Google Sheet, preserving the original layout, headings, and cell order. You will receive the PDF as a single file. Your job is simply to open it, copy—or, if you prefer, programmatically extract—the data, and paste or upload it into the sheet I will share. Accuracy is critical; I’ll be double-checking totals, column alignment, and that no rows are missed. Deliverable • A Google Sheet mirroring each table in the PDF, ready for immediate use and further analysis. Acceptance criteria • 100 % of table rows and columns captured. • Original table headings retained. • No extra spacing, merged cells, or forma...
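For the "programmatically extract" option, pdfplumber is one plausible tool when the tables have visible ruling lines; that is an assumption about this particular PDF, and oddly laid-out tables may still need a manual pass:

```python
import csv

def flatten_tables(tables: list[list[list[str]]]) -> list[list[str]]:
    """Concatenate extracted tables in order, keeping headings and cell order."""
    rows = []
    for table in tables:
        rows.extend(table)
    return rows

def pdf_tables_to_csv(pdf_path: str, out_path: str) -> None:
    """Extract every table from every page and write one CSV mirroring them."""
    import pdfplumber  # third-party: pip install pdfplumber
    with pdfplumber.open(pdf_path) as pdf:
        tables = []
        for page in pdf.pages:
            tables.extend(page.extract_tables())
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        csv.writer(f).writerows(flatten_tables(tables))
```

The CSV can then be imported into the shared Google Sheet and checked against the source totals.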
I have a single website that lists venues and I need a clean spreadsheet pulled from it. Once we start, I will share the exact URL so you can inspect the structure before you begin. For every venue that appears on the site, I want these fields captured: • Venue name • Email address • Phone number • Full physical address Please scrape the entire catalogue—restaurants, event spaces, hotels or any other venue type the site includes—then deliver the data in CSV or Excel format with one row per venue and clearly labeled columns. I’m happy to answer any structural questions about the site up-front and will consider the job complete when the file imports without errors and sample checks match what’s live on the page.
The feed may run daily (at most one run per day) and must deliver the data in XML format. The extraction must stay within the limits of the permission granted (max. 5,000 products in total, no structural load). Desired output & structure • Daily XML feed (simple, clean XML structure; preferably RSS/Atom-compatible or custom with <product> elements). • Update frequency: once per day (e.g. overnight). • Mandatory fields per product: • SKU no. (article number / internal id / product id) • Title (full product name / description) • Photo (direct URL to the product image, preferably highest resolution) • Product type (e.g. SINGLE_ARTICLE, multipack, bundle, …) • Ca...
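The custom-XML option with <product> elements can be sketched with the standard library alone. The `<products>` wrapper tag and the exact element names are assumptions based on the mandatory fields listed; an RSS/Atom-compatible variant would wrap the same data in channel/item elements instead:

```python
import xml.etree.ElementTree as ET

def build_feed(products: list[dict]) -> str:
    """Serialise the product list as a simple custom XML feed string."""
    root = ET.Element("products")  # hypothetical wrapper element
    for p in products:
        item = ET.SubElement(root, "product")
        # Mandatory fields from the brief; missing values become empty tags.
        for field in ("sku", "title", "photo", "product_type"):
            ET.SubElement(item, field).text = str(p.get(field, ""))
    return ET.tostring(root, encoding="unicode")
```

Writing the returned string to a file once per night (via cron or Task Scheduler) satisfies the once-per-day cadence without any structural load on the source.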
Looking for an OpenBullet config engineer to build a config for a social media site. Please apply only if you have experience building OpenBullet configs, and describe a couple of the configs you have built.
I need a reliable, repeatable script that automatically pulls historical and fresh match-result data for the Premier League, La Liga, Serie A, the English Championship and the Bundesliga 1. The workflow should: • visit publicly available sources you identify (official league sites, APIs, or reputable statistics portals), • extract the full-time score, date, home/away sides, venue and any metadata you can pick up (round, referee, attendance), • extract data on goals including the exact time and goalscorer • additional data extracted from match Commentary would be helpful, i.e. substitutions, shots on goal, shots off target, etc. with times will help • normalise club names so they are consistent across all leagues, and • write everything into a single, tidy...
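The club-name normalisation step is the part most easily shown in isolation: map the many spellings different sources use onto one canonical name. The alias table below is a small illustrative sample, not a complete mapping:

```python
# Illustrative alias table -- a real run would cover every club in all five leagues.
ALIASES = {
    "man utd": "Manchester United",
    "man united": "Manchester United",
    "manchester utd": "Manchester United",
    "fc bayern münchen": "Bayern Munich",
    "bayern": "Bayern Munich",
}

def normalise_club(name: str) -> str:
    """Return the canonical club name; fall back to the cleaned input."""
    key = " ".join(name.lower().split())  # collapse case and whitespace
    return ALIASES.get(key, name.strip())
```

Applying this to both the home and away columns before writing the tidy output keeps joins across leagues consistent.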
I am preparing a full systematic review that maps all available evidence on intra-renal pressure in endourology. The manuscript must cover three key angles—how pressure is measured during endoscopic procedures, the clinical outcomes linked to different pressure levels, and the complications that arise (along with their management). To give the paper real depth, I want every relevant research design represented: clinical trials, observational studies, and case reports. The core of the work is a rigorous, reproducible search across the major medical databases (MEDLINE, Embase, Cochrane, Scopus) plus grey-literature checks. Peer-reviewed journal articles are my primary focus, but I am open to conference abstracts or book chapters whenever they fill a gap in the data landscape. After th...
All of my property information currently sits in a series of PDF documents and I need it transferred accurately into my Landlord Vision account. For this phase the focus is purely on Property details; tenant and financial records will be handled separately at a later date. The PDFs contain the usual mix of addresses, ownership information, room counts, EPC ratings and other management-related notes. Your task is to extract every relevant field and populate the corresponding sections inside Landlord Vision so that each property is fully set up and ready for day-to-day management. Deliverable • A complete Landlord Vision record for each property, with every available detail from the PDFs entered and double-checked for accuracy. Accuracy and consistency are critical because this dat...
I have a set of PDFs containing numerical tables that I need reproduced in Google Sheets with absolute accuracy. Each table should appear in the sheet exactly as it does in the source—same column order, same row count, and the original number formatting left untouched. No formulas, conditional formats, or extra styling are required; simple, clean data in standard cells is all I’m after. You will receive the PDFs and an empty Google Sheet. Please transfer every figure, double-check totals, and flag any unreadable characters so I can verify them. When finished, share the updated sheet and a brief note confirming the tables you completed and any anomalies you found. Accuracy and attention to detail are more important than speed, but a prompt turnaround is appreciated.
I have a set of PDF documents that will be presented in court and I need clear, defensible confirmation of when each file was actually downloaded—not simply when it was first created or last modified. Your role is to extract and interpret all relevant metadata from these PDFs, explain the distinction between creation timestamps and download timestamps, and then put that explanation into an affidavit I can file with the court. Here’s what I need from you: • A thorough, tool-based metadata examination (EnCase, FTK, ExifTool, or similar are fine—whatever you are most comfortable defending under oath). • A concise forensic report that highlights the exact metadata fields proving the download date and shows how those fields differ from basic creation or modificati...
I'm looking for an experienced Epicor Kinetic professional to assist with generating database export reports. Key Requirements: - Proficiency in Epicor Kinetic - Expertise in report generation, for all manufacturing related activities Ideal Skills and Experience: - Experience with Epicor Kinetic database and report development tools - Ability to understand complex data structures and export requirements - Attention to detail and accuracy in report generation - Ability to work with client to determine what data is needed and how it will be used Please provide examples of similar work done and relevant qualifications.
Project Description: We are looking for a freelancer to help with a data research and filtering task using LinkedIn Sales Navigator and some online tools. Workflow: 1. I will provide LinkedIn Sales Navigator filter criteria to find companies. 2. Using those filters, you need to extract the list of companies from Sales Navigator. 3. For each company, collect: - Company name - LinkedIn company page - Website domain 4. Next, check the MX records of each website using the MX checker link that I will provide. 5. If the domain uses Google Workspace (Google MX records) → proceed to the next step. 6. Then check the BIMI record using the BIMI checker link I will provide. 7. If: - MX = Google Workspace - BIMI = Not enabled → Then collect: - CEO LinkedIn pr...
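Steps 4-7 of the workflow above can also be done directly with DNS lookups instead of web checker links; dnspython is one option. The classification rules (Google MX hosts end in google.com/googlemail.com; a BIMI record is a TXT at default._bimi.<domain> starting with v=BIMI1) are standard, but the helper below is only a sketch:

```python
def is_google_workspace(mx_hosts: list[str]) -> bool:
    """Google Workspace domains point MX at google.com / googlemail.com hosts."""
    return any(h.lower().rstrip(".").endswith(("google.com", "googlemail.com"))
               for h in mx_hosts)

def has_bimi(txt_records: list[str]) -> bool:
    """A BIMI record is a TXT at default._bimi.<domain> starting with v=BIMI1."""
    return any(r.strip().lower().startswith("v=bimi1") for r in txt_records)

def lookup(domain: str) -> tuple[bool, bool]:
    """Resolve MX and BIMI TXT records live (third-party: pip install dnspython)."""
    import dns.resolver
    mx = [r.exchange.to_text() for r in dns.resolver.resolve(domain, "MX")]
    try:
        txt = [b"".join(r.strings).decode()
               for r in dns.resolver.resolve(f"default._bimi.{domain}", "TXT")]
    except Exception:
        txt = []  # no BIMI record published
    return is_google_workspace(mx), has_bimi(txt)
```

A domain qualifies for the next step of the workflow when `lookup` returns `(True, False)`: Google Workspace MX, no BIMI enabled.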
I have between one and five PDF bank statements that need to be transcribed into a single, well-structured Google Sheet. Each transaction must appear on its own row with the following columns preserved exactly as listed in the originals: Date, Description, Debit, Credit, and Running Balance. I’ll share the scans as soon as we start; you’ll open each file, read every line, and enter the figures accurately, double-checking as you go so totals line up with the source documents. Because all statements belong to the same account, you won’t have to separate or label multiple accounts—just keep the order and page sequence intact so the running balance flows naturally. Deliverable: • A Google Sheets file containing every transaction from the provided PDFs, formatt...
I have a curated list of specific company websites and I need an automated solution that extracts complete contact information from each one. The goal is to turn every URL into a clean, ready-to-use lead. WEBSITE : The scraper should capture: • Email addresses • Phone numbers • Mailing addresses • LinkedIn profile link • Location (city / state / country) • First and last name • Occupation / job title • Company name • Company website A well-structured CSV or Excel file is the preferred output, with each field in its own column. I am comfortable with your choice of tech—Python with BeautifulSoup, Scrapy, or Selenium are all fine—as long as the script runs reliably and respects rate limits where required. Ac...
Project Title: WhatsApp to Web Portal Automation (Python) - Multi-Recharge Distributor Project Description: I am looking for a developer to automate a repetitive task for my multi-recharge business. I am a distributor for a portal () and I currently manage retailer balance transfers manually via WhatsApp. Current Workflow: Retailers send a payment screenshot and a message via WhatsApp (Format: PAY [ID] [Amount]). I manually log in to the web portal or mobile app. I enter the Retailer ID and the Amount to transfer the wallet balance. I do not verify screenshots instantly; I manually verify bank statements at night. What I Need: I need a "Robot" or an automation script (using Python Selenium ) that can: Trigger: Read incoming WhatsApp notifications. Extract Data: Automatica...
I have a batch of PDFs that contain pure text only, and I need every word lifted accurately into Google Workspace so the content can be edited and searched later. You may use Google Docs, Google Sheets, or suggest the most suitable Google Workspace tool; the goal is a clean, easily shareable file with the exact wording found in each PDF. Key points to keep in mind: • The source files are text-only—no tables or images to worry about. • Spelling, line breaks, and paragraph order must match the original. • I will provide a shared drive link for the PDFs and expect the completed Google file(s) returned in the same folder structure. If any text isn’t clear in the scan, leave a short comment where clarification is needed rather than guessing.
I need an experienced Python developer to build a commercial multi-client visa appointment automation system for Turkey-based applicants. Full source code ownership is required upon delivery. --- PROJECT BACKGROUND I run a visa consultancy service in Turkey. My clients need visa appointments from VFS Global portals. The current manual process is too slow and I need a fully automated, scalable system that handles 50-100+ clients simultaneously. --- TARGET PORTALS Source: Turkey () Target countries (minimum 6-7): - United Kingdom - Germany - France - Netherlands - Italy - Spain - Sweden System must be modular so new countries can be added later. --- CORE FEATURES REQUIRED [1] MULTI-CLIENT MANAGEMENT DASHBOARD - Register and manage 100+ client records - Per client: Full name, Passpo...
I need a comprehensive dataset of FMCG items that already includes each product’s bar code. You do not need to limit yourself to one category; feel free to pull records from food and beverages, personal care, household items, or any other fast-moving line you have reliable data for—the broader the coverage, the better. Please structure the final file so it can be imported directly into a database. I will adapt it to my own environment, so a clean, well-documented import format is essential (CSV wrapped in a SQL-ready script, JSON dump, or any other approach that drops smoothly into a relational or NoSQL store is fine as long as the mapping is clear). Key fields I must see per record: product name, brand, bar code (EAN/UPC), pack size or net weight, category, and MRP. Extra at...