Answered

Everly Miller
Asked: May 18, 2022 · In: r
An easy way to handle the error – duplicate ‘row.names’ are not allowed.

As advised, I tried some code samples from another forum, but they did not fix the problem. My question is about the “duplicate ‘row.names’ are not allowed” error in R – how do I solve it? The file’s header row and the command I used are:

StartDate, var1, var2, var3, ..., var14
systems <- read.table("http://getfile.pl?test.csv", header = TRUE, sep = ",")

and the result:

duplicate 'row.names' are not allowed

What does the message mean, and how can I fix it? If you have a better answer, please leave it in the answer box below.


2 Answers

  1. Best Answer
     lyytutoria (Expert)
     Answered on June 22, 2022 at 6:42 am

    The cause:

    Looking at your file, every row except the header ends with a trailing comma. Each data row therefore has one more field than the header, so R assumes the row names are in the first column of values.
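    A minimal sketch that reproduces the behaviour from an in-memory string (read.table's text argument stands in for your actual file; the column names here are made up):

    # Header has 3 fields; each data row ends with a comma, so it has 4.
    # read.table therefore takes the first data column as row names, and
    # the repeated "2022-01-01" triggers the error.
    csv_lines <- c("StartDate,var1,var2",
                   "2022-01-01,1,2,",
                   "2022-01-01,3,4,")
    read.table(text = csv_lines, header = TRUE, sep = ",")
    # Error: duplicate 'row.names' are not allowed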

    Solution:

    The easiest way to solve this error is to tell read.table not to use row names:

    systems <- read.table("http://getfile.pl?test.csv",
    header=TRUE, sep=",", row.names=NULL)

    After doing this, all your rows are simply numbered.

    Alternatively, use read.csv, a wrapper around read.table that already sets sep="," and header=TRUE, which shortens the call to:

    systems <- read.csv("http://getfile.pl?test.csv", row.names=NULL)
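    If you want to double-check the diagnosis first, counting the fields on each line (a sketch using base R's count.fields on the URL from your question) should show the header with one fewer field than the data rows:

    # Tabulate the number of comma-separated fields per line; a header
    # with one fewer field than every data row confirms trailing commas.
    table(count.fields("http://getfile.pl?test.csv", sep = ","))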
  2. Liam Raymond
     Answered on May 25, 2022 at 9:20 pm

    A related question highlights the section of the ?read.table documentation that explains your problem:

    If there is a header and the first row contains one fewer field
    than the number of columns, the first column in the input is used
    for the row names. Otherwise if row.names is missing, the rows are numbered.

    Your header row most likely has one fewer field than the rest of your file. read.table therefore assumes the first column holds the row.names (which must be unique) rather than ordinary data (which may contain duplicates). This can be fixed in one of two ways:

    1. Add a delimiter (i.e. \t or ,) to the front or the end of the header row in the source file.
    2. Remove the trailing delimiters from your data rows.

    Which option you choose depends on your data.

    Example: because the delimiters don’t match, the header row is read as having one fewer field than the data rows:

    v1,v2,v3 # 3 items!!
    a1,a2,a3, # 4 items
    b1,b2,b3, # 4 items

    By default this is interpreted with the first field of each data row split off as a row name:

        v1,v2,v3   # 3 items
    a1  a2,a3,     # row name a1 + 3 items
    b1  b2,b3,     # row name b1 + 3 items

    The values of the first column (which has no header), a1 and b1, are interpreted as row.names. If that column contains duplicates, which is entirely possible in real data, you get this error.

    You can set row.names = NULL to prevent the shift, but there is still a mismatch between the number of fields in the header and in the data, because the delimiters don’t match.

    Solution 1: Add a trailing delimiter to the header row

    v1,v2,v3, # 4 items!!
    a1,a2,a3, # 4 items
    b1,b2,b3, # 4 items

    Solution 2: Remove the trailing delimiters from the non-header rows

    v1,v2,v3 # 3 items
    a1,a2,a3 # 3 items!!
    b1,b2,b3 # 3 items!!
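
    If you prefer not to edit the source file by hand, Solution 2 can also be applied from R itself (a sketch, reading the URL from the question and stripping trailing commas before parsing):

    # Read the raw lines, drop a trailing delimiter where present,
    # then parse the cleaned text with read.csv.
    raw_lines <- readLines("http://getfile.pl?test.csv")
    clean_lines <- sub(",\\s*$", "", raw_lines)
    systems <- read.csv(text = clean_lines)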
